By studying Nature’s biological materials and their impressive multifunctional properties and structures, we can better understand how to develop sustainable material ecosystems. This urgent mission can be accelerated and assisted with scientific generative AI tools.
Nature has far outpaced humans in developing multifunctional, hierarchical materials that achieve impressive material properties while being fully degradable and part of native ecosystems. But how can we effectively model the intricate time and length scales in biological systems to translate design principles to meet engineering demands? We postulate that generative artificial intelligence (AI) can play a crucial role in solving this interdisciplinary challenge, translating insights across disparate knowledge domains and forming the basis for a new sustainable materials economy. Techniques like generative adversarial networks, transformer neural networks, and denoising diffusion modeling have been used to solve complex bio-inspired design challenges. Emerging tools such as multimodal large language models provide robust foundations with reasoning abilities that can be amplified when connected into multi-agent systems with access to first principles (e.g., Density Functional Theory, molecular dynamics, coarse-grained simulations) or physical experiments (e.g., autonomous manufacturing, universal material/mechanical testing). Through this approach, generative AI can not only address complex forward and inverse tasks but also develop ontological knowledge that offers interpretability using graph theory, facilitating the translation of knowledge across diverse scientific domains. This approach empowers the critical thinking essential for addressing contemporary global environmental challenges. The emerging next generation of AI systems surpasses the limitations imposed by its original training data, actively exploring new understanding. Bio-inspired generative AI notably widens the design space, fostering natural scientific discovery while departing from cyclical human-centric design, pushing the frontier of biomateriomic research.
Keywords: Biological materials; bioinspiration; generative materials design; large language models; transformer neural networks; sustainability
Financial Support: This material was supported by the MIT Generative AI initiative. This work was supported by the Army Research Office (W911NF1920098 and W911NF2220213), ONR (N00014-19-1-2375 and N00014-20-1-2189), and USDA (2021-69012-35978). Further support was provided by the National Science Foundation Graduate Research Fellowship under Grant No. 2141064 as well as the MIT-IBM Watson AI Lab.
Creating sustainable materials to replace those derived from fossil fuels is crucial to building sustainable economies. The current materials design approach, reliant on nonrenewable resources, poses environmental sustainability risks, causing greenhouse gas emissions, pollution, and waste accumulation.[1],[2],[3] Furthermore, the manufacturing of these materials generates synthetic, toxic, and nondegradable substances that endanger public health, raising ethical questions about the environment we leave to future generations.[4],[5] Hence, it is imperative to design sustainable materials that minimize environmental degradation and decrease energy costs, all while ensuring the materials meet the necessary mechanical standards for their intended applications. To that end, we seek inspiration from nature to design multifunctional, hierarchical materials that are sustainably manufactured and processed, often using paradigms distinct from those in human-made materials. Many biological materials offer powerful solutions that combine disparate material properties. For instance, in synthetic materials, there is often a trade-off between stiffness and high extensibility. Natural materials, on the other hand, have evolved hierarchical structures that yield optimal properties and sustainable processes when they reach the end of their life cycle.[6] Despite extensive human efforts to engineer strong, tough multifunctional materials, biological materials often still outperform synthetic ones. The traditional process of developing novel materials with tailorable properties is time-consuming, labor-intensive, and expensive. As a result, the incomplete human material ecosystem pales in comparison to the extensive and complex biological material ecosystem that has been refined through eons of evolution (Figure 1).
When implemented, bio-inspired design can lead to innovation that challenges the status quo. A few examples include whale fins for wind turbines;[7] gecko feet for robotics or adhesives;[8],[9],[10],[11] and structural engineering applications like the Gherkin Tower (London, England), Eastgate Center (Harare, Zimbabwe), and Esplanade (Singapore, Republic of Singapore), which draw inspiration from the structural arrangements in biological materials to improve performance.[12] The Gherkin Tower mimics the lattice-like exoskeleton of the Venus’ flower basket, dispersing stresses from strong water currents and aiding in reducing wind deflection.[13] Similarly, the Eastgate Center takes inspiration from mound-building African termites, integrating a thermoregulation system. These examples underscore the remarkable ingenuity of nature for structural design, not to mention other incredible multifunctional abilities in self-healing, sensing, adhesion, and optics with applications spanning from sustainability to medicine.[14]
Biological materials have developed remarkable hierarchical structures across the nanoscale to the macroscale (Figure 2a). Interdisciplinary experts have extensively studied materials such as conch shells, nacre, bamboo, horse hooves, and spider silk, to name only a few.[15],[16],[17],[18],[19] Notably, nacre has been highly investigated for its impact resistance, fracture toughness, and ballistic performance. Spider silks exhibit phenomenal strength-to-weight ratio, flexibility, and diversity (with different species producing silks with varying properties), offering promising avenues for applications across materials science and medicine.[20],[21],[22] Furthermore, the study of interfaces in natural structures like shells, lobster exoskeletons, antler bones, and silica sponges can guide the engineering of materials capable of averting catastrophic failures.[23],[24],[25],[26],[27] In addition, insights into materials resistant to torsion, bending, and buckling can be gleaned from the examination of plant stems and porcupine quills.[28],[29] Even more impressive is the fact that these biological materials are crafted from a few basic, organic components such as cellulose, keratin, and chitin[30],[31],[32] and, at the ultimate level, from weak chemical interactions such as H-bonding (Figure 2b). Despite relying on a limited set of resources, nature achieves staggering biodiversity. For context, only a small fraction of the world’s biodiversity (an estimated 8.7 million eukaryotic species) has been identified, leaving a vast amount yet to be explored.[33]
The advancement of science-focused generative artificial intelligence (AI) enables a more efficient exploration of biological systems, paving the way for innovative bio-inspired materials and streamlining the translation from scientific insights to scalable engineering applications.[34] What sets generative AI apart from other techniques is the ability to generate new data even in situations in which the underlying system is highly complex: multiscale, multimodal, and deeply interconnected.[35],[36] Generative AI excels in navigating and enabling the rapid and nuanced creation of diverse data forms, such as text, sequences, graphs, structural designs, and imagery. To illustrate the need for elaborate modeling systems, consider the field of protein design, for which simple alpha helices and beta sheets are universal building blocks that assemble to form an immense diversity of structures, a concept known as the universality–diversity paradigm[36] (Figure 2c). These complexities are furthered when one considers the stark variety in both time and length scales (Figure 2d). These complexities can be addressed using advanced AI systems with more robust and realistic generation techniques. We postulate that generative AI can play a crucial role in accelerating the rate of innovation by:
Learning from nature: Support research efforts in elucidating the complex hierarchical composition, structure, and property relationships found in biological materials.
Discovering nature: Discover new, yet unexplored, biological materials and systems.
Designing inspired by nature: Generate tailorable, optimized bio-inspired designs at significantly lower cost and in less time than traditional materials design processes.
Connecting domains of knowledge: Generate hypotheses that bridge scientific knowledge and bio-inspired engineering solutions, specifically enabling the transfer of ideas across domains and modalities, and facilitate the development of AI systems that can solve complex multiscale, multimodal boundary value problems.
Generative AI as applied to biological, living, and bio-inspired systems holds significant potential but also still faces inherent complexities. For example, generative AI must adapt to the unique rules found in nature, necessitating novel tools for discovery beyond traditional human methods. To address these challenges, we must engage diverse disciplines to encourage the exploration of unconventional ideas and methodologies to increase cross-fertilization of knowledge.[37] Generative AI, when informed by insights from biology or ecology, can hence devise more effective strategies for materials discovery, leading to breakthroughs that benefit multiple fields and helping to address environmental issues. Here, the convergence of disciplines holds profound significance in unlocking novel frontiers, especially in the realm of generative AI for bio-inspired materials design.
Generative AI has evolved from early ideas about creating intelligent machines in the mid-twentieth century to quantitative tools that complement other numerical and computational approaches. Early research focused on symbolic or analytical approaches as well as basic neural network models. Now, research has evolved into a flexible arsenal of tools and techniques with training strategies that meet complex data availability constraints. The evolution spans from technical advancements in unstructured machine learning to more expressive structured strategies based on graph models, which help to capture complex multiscale biological and bio-inspired material behaviors to solve forward and inverse modeling problems. These generative tools aid, for instance, in developing novel designs for architected materials inspired by biology or coming up with new molecular mixtures for next-generation solvents. Previous work also includes studying leaf venation patterns for architected composites, graded material design, new spider silk sequences for synthesizing silk with optimized properties, filament-based ultra-light-weight web structures inspired by spider webs, and structure–property relationships from known proteins for de novo discovery. As visualized in Figure 3, in this section, we chronicle a rough timeline of generative AI techniques, example research, and the adaptations that led to emerging work in using transformer models, including large language models (LLMs), as well as multi-agent systems to combine not only disparate scales but also disparate types of machine learning and other multiscale modeling tools.
A critical spark in generative AI can be seen as the use of variational autoencoders (VAEs)[38],[39] and generative adversarial networks (GANs),[40] emerging around 2013-2014. These types of generative strategies find applications to develop new designs or to amalgamate design cues from different naturally occurring materials or solutions to problems. VAEs use two neural networks, an encoder, and a decoder that perform dimensionality reduction to create a latent space of encoded data that can be of great use. The VAE latent space can be mined for novel generated data. In one study, the VAE was used to optimize cantilever structures, ultimately obtaining low-density designs that can explore structural design motifs found in nature.[41] In other work, reinforcement learning was used to generate bio-inspired designs.[42]
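The encode–sample–decode pattern at the core of a VAE can be sketched in a few lines; the linear encoder/decoder, the dimensions, and the random weights below are illustrative simplifications (a real VAE trains nonlinear networks end to end), intended only to show where the minable latent space comes from:

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_mu, W_logvar):
    """Map input to the mean and log-variance of a latent Gaussian."""
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    """Sample z = mu + sigma * eps so gradients can flow through mu and sigma."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decoder(z, W_dec):
    """Map latent samples back to data space."""
    return z @ W_dec

# Toy dimensions: 8-dimensional "designs", 2-dimensional latent space.
x = rng.standard_normal((4, 8))            # batch of four inputs
W_mu = rng.standard_normal((8, 2)) * 0.1
W_logvar = rng.standard_normal((8, 2)) * 0.1
W_dec = rng.standard_normal((2, 8)) * 0.1

mu, logvar = encoder(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
x_hat = decoder(z, W_dec)

# KL term that regularizes the latent space toward a standard normal prior:
kl = -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar))

# New candidates are generated by decoding points sampled from the prior:
z_new = rng.standard_normal((1, 2))
new_design = decoder(z_new, W_dec)
```

Mining the latent space, as in the cantilever study, amounts to decoding points like `z_new` and screening the resulting designs.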
GANs are a prominent early approach to generative strategies, such as in the conditional prediction of stress or strain fields from microstructure data.[43],[44] GANs use a generator and a discriminator, but they are trained in an adversarial manner until the generator can create samples similar enough to the training data to fool the discriminator. The resulting generative models can be used, for instance, to expand on natural designs and come up with new microstructures that have not yet been identified in evolutionary processes. In a bio-inspired material study, images of leaf venation structures were used to train a GAN to generate new structures, which were tiled to generate quasi-2D materials that behaved like open-celled foams[45] (Figure 3a).
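The adversarial loop behind a GAN can be sketched in a toy one-dimensional setting; the linear generator, logistic discriminator, hand-derived gradients, and all hyperparameters below are illustrative simplifications, not the architectures used in the cited studies:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Real" samples come from N(3, 0.5); the generator g(z) = a*z + b
# reshapes standard-normal noise into candidate samples.
a, b = 1.0, 0.0        # generator parameters
w, c = 0.1, 0.0        # discriminator D(x) = sigmoid(w*x + c)
lr = 0.05

for _ in range(2000):
    real = rng.normal(3.0, 0.5, 64)
    z = rng.standard_normal(64)
    fake = a * z + b

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator ascent on log D(fake): push fakes toward the "real" region.
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)   # d/da of log D(a*z + b)
    b += lr * np.mean((1 - d_fake) * w)       # d/db
```

After training, the generator offset `b` has been pushed toward the real-data mean; in the leaf-venation study, the same adversarial pressure drives an image generator toward structures statistically indistinguishable from natural venation patterns.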
In 2017, the introduction of the transformer architecture in the seminal paper “Attention Is All You Need” marked a pivotal shift, enabling not only remarkable capabilities in natural language processing[46] but also the modeling of complex systems with long-range order, such as multiscale hierarchical biological materials[47],[48],[49],[50] (Figure 4a-b). The technique has emerged as an important architecture due to the underlying graph-based strategies that yield structured learning. The self- (and/or cross-) attention mechanisms allow models to understand how different parts of the input data relate to each other. With this, hierarchically stacked graphs of interactions are built to comprehensively capture complex mechanisms across a hierarchical multiscale system (see visualization in Figure 4b-c). In a transformer architecture, each connection in the graph is not just a binary link but carries an associated weight and direction. This weight signifies the degree of relevance, or attention, one element has with respect to another.
In attention models, the input data is provided as a set of building blocks, whereby their initial ordering or relationship (e.g., whether they are ordered sets of words, an amino acid sequence, pixels in an image, etc.) must be provided to the model as input (Figure 4d). For instance, this can be done by positional encoding to mark the order of input, either in 1D (e.g., text, sequences, symbols, etc.) or 2D/3D (field data, multiple frames, etc.).[50] These examples illustrate one of the powerful aspects of transformers: the ability to natively handle multimodal data. One reason these models work well for scientific problems is that, unlike sequence-based strategies that process inputs in a fixed order, the attention mechanism in transformers models interactions in a global context. Physics-inspired variations of the architecture include the incorporation of convolutional operators to build hierarchical embeddings that model relationships at different resolutions, akin to graphs with different numbers of edges[49] (Figure 4c).
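The attention weights described above, forming a dense, directed, weighted graph over the input building blocks, can be sketched together with sinusoidal positional encoding; all dimensions and random weights here are purely illustrative:

```python
import numpy as np

def positional_encoding(n_tokens, d_model):
    """Sinusoidal encoding that marks each building block's position."""
    pos = np.arange(n_tokens)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention: every token attends to every other."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax: rows sum to 1
    return weights @ V, weights

rng = np.random.default_rng(0)
n_tokens, d_model = 6, 8     # e.g., six residues of a short sequence
X = rng.standard_normal((n_tokens, d_model)) + positional_encoding(n_tokens, d_model)
W_q, W_k, W_v = (rng.standard_normal((d_model, d_model)) * 0.1 for _ in range(3))
out, weights = attention(X, W_q, W_k, W_v)
```

The `weights` matrix is exactly the weighted, directed attention graph: entry (i, j) quantifies how much token i attends to token j, regardless of their distance in the sequence.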
This flexible approach makes transformer models incredibly powerful for diverse multimodal tasks (language, symbols, numerical data, etc.) and for finding patterns across multiple modalities.[51] A specifically powerful feature of this class of models is the pretraining capacity, where lower-quality data can be used to endow models with fundamental knowledge. This knowledge is then utilized to train the model to solve specific problems in a secondary finetuning stage, which typically requires much less, but higher-quality, data. Philosophically, pretraining and finetuning represent a manifestation of the universality–diversity paradigm in bio-inspired materials, where universal knowledge is learned during pretraining of the attention operator and then adapted, or finetuned, to solve a specific set of problems.
In a study of material fracture mechanics, building block–based microstructure representations were employed in an attention-based neural network model, FieldPerceiver, which uses multi-headed attention that captures short- and long-range order to predict complex materials phenomena such as singular stress and displacement fields in fracture problems.[48] This long-range capability is vital for capturing hierarchical relationships like those seen in biological materials. Aside from building block–based microstructures, the transformer architecture enabled language techniques that can leverage text for designing bio-inspired materials. Particularly, transformer-based generative AIs can be built on top of vector-quantized GANs[52] that use discrete linguistic models to facilitate the generation of 2D or 3D data like microstructures or physical fields, including the ability to capture materials failure dynamics[53] (Figure 2d).
Throughout civilization, humans have designed materials by envisioning an idea and illustrating it, for instance, through sketches. Central to this historical journey of material design lies the intersection of language, cognition, and tangible expression. This text-to-material paradigm has been explored in a study that combines transformer-based models, computational simulations, and 3D printing to realize the translation from words to architected materials.[54] Empowered by transformer-based models trained originally for image or text generation, an approach was showcased in designing bio-inspired materials using simple text inputs,[55] a 3D-printed example of which is shown in Figure 3b. Created using a similar method, Figure 3c and Figure 3d show examples of a flame-inspired design[56] and a diatom-inspired material.[57] The latter examples illustrate the use of generative methods to traverse modalities by which concepts are realized as matter.
On the nanoscale, proteins are some of the most fundamental macromolecules defining life, including essential biomaterials like bone, muscle, and skin. Here, the specific interest lies not only in structure prediction but also in designing proteins that meet a specific set of properties. Having emerged from nature, proteins present an elegant yet complex and rich design platform in which highly dynamic mechanisms govern key properties. The various functions and outstanding properties of proteins can be attributed to folded 3D structures, encoded by 1D sequences.
Generative AI has enabled the study of proteins in both the forward and inverse directions. In forward predictions (e.g., predicting structures of existing proteins via AlphaFold2[58] and RoseTTAFold[59],[60],[61]), researchers have developed end-to-end models that predict structural features (e.g., secondary structure type and content,[62],[63],[64],[65] binding sites,[66] and surfaces[67]) and properties (e.g., solubility,[51],[68],[69] melting temperature,[70] natural vibrational frequencies,[71],[72] and mechanical strength[73]). For inverse design problems, the goal is typically to create new protein designs that meet desired properties; efforts have combined forward predictors with search algorithms to explore the vast design space of proteins for desired structure and property targets.[74],[75]
An important aim is de novo protein design. This involves generating undiscovered protein sequences that meet desired structure and function, a task historically considered immensely challenging.[76],[77] Extensive laboratory attempts have accumulated vast data regarding proteins, and now with the transformer architecture, a pivotal opportunity to develop bio-inspired design exists by integrating this data into a predictive design tool. For instance, a method was developed for generating de novo proteome-inspired molecular structures using a combination of transformer- and GAN-based architectures.[44] The model ultimately produced voxel-based representations of molecular structures based on natural proteins, with a particular emphasis on the effects of proteins’ 3D shapes on mechanical properties (Figure 3e).
In 2020, generative diffusion models emerged, rooted in the principles of nonequilibrium thermodynamics, gaining popularity through tools such as Stable Diffusion and DALL-E. Diffusion models leverage denoising diffusion processes to generate high-quality and realistic data, particularly in multimodal settings such as text-conditioned image synthesis. These types of models offer powerful strategies to generate conditioned solutions to a variety of inverse problems. When combined with attention models, diffusion models can not only capture long-range relationships in data but also implement intricate conditioning mechanisms to meet target properties or boundary conditions.
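The denoising-diffusion idea can be sketched minimally: a forward process gradually corrupts clean data with Gaussian noise, and generation amounts to inverting it with a learned noise estimator. In the sketch below the estimator is replaced by an oracle that knows the true noise, so the inversion is exact; the schedule and dimensions are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)    # cumulative signal retention

def q_sample(x0, t, eps):
    """Forward diffusion: corrupt clean data x0 with t steps of noise."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps

def predict_x0(x_t, t, eps_hat):
    """Invert the forward map given a noise estimate (a network's job)."""
    return (x_t - np.sqrt(1.0 - alphas_bar[t]) * eps_hat) / np.sqrt(alphas_bar[t])

x0 = rng.standard_normal(16)            # a clean "design" vector
eps = rng.standard_normal(16)
x_noisy = q_sample(x0, 500, eps)

# With a perfect noise estimate the clean design is recovered exactly;
# a trained denoiser only approximates this, stepping back through t.
x0_rec = predict_x0(x_noisy, 500, eps)
```

Conditioning, as used in the protein and microstructure studies, enters by feeding target properties to the noise estimator so that the reverse process is steered toward designs meeting those targets.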
With diffusion-based generative models,[78],[79],[80],[81] the directed design of novel functional proteins was also furthered. These models provide a direct map from the desired characteristics to potential designs, introducing an impactful paradigm for biomaterial designs.[82],[83],[84],[85],[86] To study protein structure-to-sequence design, attention-based diffusion models were used to predict amino acid sequences based on secondary structure design objectives at the sequence/residue level.[87] After training on experimental data, generative protein diffusion models can robustly, efficiently, and accurately generate proteins, many of which are de novo, with desired secondary structures. For challenging property-to-sequence design tasks, there is often less property data available than structure or sequence data. However, it has been shown that protein language diffusion models[88] can be developed to handle this challenge, in which a protein language model pretrained on sequence data provides an expressive and efficient embedding space. The diffusion model then learns to perform generation inside this embedding space. Trained on a force-unfolding dataset derived from full-atomistic molecular modeling, this model can design de novo proteins that fulfill a complex set of targeted mechanical properties (Figure 3f), including unfolding energy, mechanical strength, and unfolding force-separation curves, offering a rapid pathway to discover protein materials with superior mechanical properties.
Generative methods can solve inverse design problems at much larger scales, as shown in studies of spider silk and webs. Spider webs are remarkable biological structures, characterized by hierarchical architectures showcasing impressive structural performance and mechanical properties (e.g., lightweight but high strength).[89] Yet, the understanding of the structure–property relationship in spider web structures remains limited, arising from the structural complexity across multiple scales and the diversity in both spider web types and silk protein types. Additionally, the scarcity of web data and quantified web properties has constrained research. Thus, advanced tools were applied to newly generated datasets[22],[90],[91],[92] to design and analyze spider webs at various scales.
Generative models have been developed for designing both spider webs and spidroin sequences.[22],[92] To design synthetic spider web–like material architectures on the macroscale, an extensive analysis of the heterogeneous structures of spider webs was conducted, using diffusion and autoregressive modeling with key geometric parameters as conditions.[22] Before training, inductive representation sampling was applied to native spider web graph data[90] to generate smaller subgraphs, capturing spatial heterogeneity and improving the construction of local features. Specifically, the study introduced and compared the generative performance of models with varying architectures and neighbor representations. Furthermore, an algorithm was employed to assemble web-based designs according to a series of geometric design targets, generating a gallery of designs. Then, selected web-inspired designs were 3D printed and tensile tests were conducted to assess mechanical properties (Figure 3g-h).
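A minimal sketch of extracting local subgraphs from a web-like graph, in the spirit of the inductive representation sampling described above; the miniature "web" adjacency list and node labels are hypothetical:

```python
from collections import deque

def sample_subgraph(adj, seed, max_nodes):
    """Breadth-first local subgraph around a seed node, capturing the
    spatial neighborhood so local features can be learned from it."""
    seen, order, queue = {seed}, [seed], deque([seed])
    while queue and len(order) < max_nodes:
        node = queue.popleft()
        for nbr in adj.get(node, []):
            if nbr not in seen and len(order) < max_nodes:
                seen.add(nbr)
                order.append(nbr)
                queue.append(nbr)
    # Keep only edges whose endpoints both fall inside the subgraph.
    edges = [(u, v) for u in order for v in adj.get(u, []) if v in seen and u < v]
    return order, edges

# Hypothetical miniature "web": a hub with four radial nodes joined in a ring.
web = {
    0: [1, 2, 3, 4],
    1: [0, 2], 2: [0, 1, 3], 3: [0, 2, 4], 4: [0, 3],
}
nodes, edges = sample_subgraph(web, seed=0, max_nodes=3)
```

Repeating this over many seeds yields a training set of small subgraphs that preserves the spatial heterogeneity of the full web, as the study's sampling step is described to do.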
Attention-based diffusion models have been particularly useful for solving forward and especially inverse problems.[93],[94] As mentioned, previous modeling attempts with VAE and long short-term memory approaches required multiple iterations and a resource-intensive genetic algorithm to solve the forward problem. However, with the advent of diffusion modeling, a subsequent study was conducted on hierarchical honeycomb structures for compressive strength (Figure 3i). The attention-based diffusion model architecture yields a reversible and consistent method to relate the four-level hierarchical architecture to the associated nonlinear compressive material behavior. A forward model trained to predict stress-deformation curves from the microstructure showed a strong statistical correlation between the ground truth and the predictions. A separate inversely trained model also produced microstructures that were consistent with the input stress curves and were validated with coarse-grained molecular dynamics simulations and experimental studies.
Similar methods were used to design a flexible language-based framework to discover complex chemical designs, represented using encoded molecular structures.[95] Particularly, a series of deep learning models were trained on quantum mechanical properties of molecules and a newly curated dataset for deep eutectic solvents, which are solvents that can advance sustainable synthesis and can help with the processing of bio-inspired hybrid materials.[96] The main model was successful in proposing multiple de novo deep eutectic solvent compositions (Figure 3j). In other work, researchers developed hierarchical bio-inspired materials and used a diffusion model to design, in a single shot, material microstructures that meet a certain mechanical design target (Figure 3l).[97]
Language is a general symbol-based form of interaction at the root of significant parts of human and nonhuman communication. Highly performant transformer models have been shown to have powerful capabilities across a range of tasks,[98] where decoder-only transformer language models were scaled to very large sizes (billions to hundreds of billions of parameters) and trained on vast datasets (Figure 5a), with prominent examples including OpenAI’s GPT models or Google DeepMind’s Gemini. These models became known as LLMs for their ability to generate human-readable, dialogue-like text and more. LLMs have exhibited high levels of reasoning, ‘creative’ thinking, and the ability to connect knowledge across domains, specifically when pretrained LLMs are further finetuned in scientific subjects. Of great significance is the availability of high-performance open-source foundation models such as Llama 2 and Mistral/Mixtral as well as the newer development of small LLMs (e.g., Phi) dedicated to research purposes.[99],[100]
As an example, BioinspiredLLM was developed by training on a corpus of over a thousand fundamental research articles in the field of biological materials mechanics.[101] BioinspiredLLM has shown strong capabilities in knowledge recall, hypothesis generation, and assisting with research tasks, which can significantly accelerate progress and discovery. For example, BioinspiredLLM hypothesized the mechanical behavior of a biological material that was not captured in its training set, and the hypothesis successfully matched the findings of a more recent experimental study. BioinspiredLLM has also shown high levels of creativity when prompted for bio-inspired design ideas such as algae-, feather-, and coral-inspired designs.
Another development was MechGPT, an LLM finetuned on a textbook on materials and multiscale methods,[102] leading to the framework shown in Figure 5b. MechGPT excelled at knowledge retrieval and at connecting disparate domains, such as mechanics and biology, or failure mechanics intersecting with, for instance, art. To that effect, MechGPT also extracted structural insights from ontological knowledge graphs, which help provide frameworks for new research questions and identify new relationships.
Retrieval-Augmented Generation (RAG) methods can be an effective strategy to reinforce knowledge claims, becoming a widely used approach for LLM knowledge recall or retrieval of new data sourced from simulation or experiment. For RAG queries, the retrieval system first searches a vector-embedded database for information relevant to the given query. That information is then fed as context for the LLM during generation of the answer, which also enables traceback of information used to generate answers to specific sources. This strategy also provides a way to add context from other knowledge fields and thereby intersect different areas of expertise.
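The retrieve-then-generate flow can be sketched as follows; the bag-of-words "embedding" stands in for the learned dense embeddings real RAG systems use, and the example documents and query are illustrative:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding'; real systems use learned dense vectors."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A miniature vector-embedded database of source passages.
documents = [
    "nacre toughness arises from brick and mortar microstructure",
    "spider silk combines strength and extensibility via beta sheets",
    "deep eutectic solvents enable greener materials processing",
]

def retrieve(query, k=1):
    """Rank stored passages by similarity to the query and return the top k."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

query = "why is nacre so tough"
context = retrieve(query)

# The retrieved passage is injected as context, enabling source traceback.
prompt = f"Context: {context[0]}\n\nQuestion: {query}\nAnswer citing the context:"
```

Because the answer is generated from an explicitly retrieved passage, the supporting source can always be traced, which is the knowledge-reinforcement property described above.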
Despite the potential of standalone LLMs, issues persist, such as occasional errors and the need for substantial alignment efforts. Working with large models can be difficult and requires substantial compute power for fast inferencing, unless the model is hosted via an application programming interface (API) or through special-purpose software (e.g., vLLM, llama.cpp, LM Studio).
To build robust, diversified, and responsive feedback systems, researchers have been exploring multi-agent AI systems. Herein, the concept of a generative step, such as asking a question to an LLM and getting an answer, becomes a small step in a more complex development of the answer. In human thought, as in a step of a mathematical proof or a time step of a computational simulation, a solution is not generated in a single step but derived through multiple interacting thoughts that ultimately yield the solution. Harnessing multiple LLMs can create an adversarial feedback system that may be transformative for a broader impact of AI in science. In such a setting, LLMs can be used in an agentic framework to realize adversarial generative architectures, as done in a recent study in which bidirectional translations between disparate representations (e.g., music to proteins and vice versa) were discovered using an agentic network of four language models that were adversarially trained.[103]
Collaborative human-in-the-loop AI frameworks, led by a human agent, have been proposed using BioinspiredLLM in tandem with Stable Diffusion, a diffusion-based text-to-image model. In this scenario, a user interfaces with BioinspiredLLM to create and refine text prompts, and Stable Diffusion is incorporated to generate the design in 2D image format, which the human user can then extrapolate into a 3D design (Figure 3k). Taking this a step further involves implementing multiple LLM agents in a human-out-of-the-loop system (Figure 5c). MechGPT can be interfaced with another LLM agent that is an expert in another subject, such as biology or music. The agents have been shown to collaborate to solve knowledge problems and generate experimental ideas, expanding the horizon of knowledge by combining disparate knowledge domains (Figure 6a).
These agents can be data-informed by grounding them in theory and simulation for direct first-principles data generation, where LLMs not only answer queries but also write code to solve a task, which is then used to solve a complex question. The MechAgents framework, for instance, explores the potential of solving mechanics problems via LLM-powered multi-agent systems that specifically involve planning strategies.[104] Specifically, a multi-agent team was shown to self-correct and apply finite element methods to autonomously solve classical elasticity problems and successfully integrate generative AI methods with classical numerical modeling. This framework shows the potential of synergizing language models’ intelligence, the reliability of physics-based modeling, and diverse, dynamic collaboration. It opens new avenues for automating engineering problem-solving, generating data, and integrating knowledge, fostering human–AI collaboration.[105]
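The generate-execute-correct pattern behind such agentic frameworks can be sketched with stubs; `solver_agent` below is a hypothetical stand-in for a real LLM call, and its canned code attempts (including the deliberate first-attempt bug) are illustrative, not output of any actual model:

```python
import traceback

def solver_agent(task, feedback=None):
    """Stand-in for an LLM that writes code for a task. A real system
    would prompt a language model with the task plus any error feedback."""
    if feedback is None:
        return "area = width * hieght"   # first attempt contains a typo/bug
    return "area = width * height"       # corrected after seeing the traceback

def execute(code, env):
    """Executor agent: run the proposed code, report success or the error."""
    try:
        exec(code, env)
        return True, None
    except Exception:
        return False, traceback.format_exc()

env = {"width": 3.0, "height": 4.0}
feedback = None
for attempt in range(3):
    code = solver_agent("compute the plate area", feedback)
    ok, feedback = execute(code, env)
    if ok:
        break   # the self-correction loop terminates once execution succeeds
```

The same loop structure, with a finite element code in place of the one-line task, is how a multi-agent team can iterate toward a working numerical solution.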
The interpretability of a model is crucial for comprehending generated results and assessing the model's behavior, particularly in real-life tasks with intricate relationships.[106] In MechGPT,[102] the use of an ontological graph strategy enhances interpretability; LLMs have been observed to be effective at extracting structural insights through Ontological Knowledge Graphs.[105] These interpretable graphs, with nodes representing entities and edges indicating relationships, establish a knowledge framework, offering explanatory insights across domains and providing visual representations of various questions and topics (Figure 6b). Such graph-forming interpretable strategies improve the accuracy and consistency of shared information, leading to more effective generation tasks. The use of generic graph-structured data, rather than images or domain-specific graph representations, presents another route to enhanced model interpretability and was implemented for spider web generation and prediction tasks.[22],[92]
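As a schematic of how such a graph supports explanation, the following pure-Python sketch stores illustrative subject-relation-object triples (the specific triples are simplified examples, not extracted facts) and uses breadth-first search to recover a human-readable reasoning chain between two concepts.

```python
# Minimal ontological knowledge graph: nodes are entities, directed
# edges carry relationship labels, and a breadth-first search surfaces
# an explanatory path between concepts from different domains.

from collections import deque

# Illustrative triples linking a biological material to an engineering need.
triples = [
    ("spider silk", "is composed of", "beta-sheet nanocrystals"),
    ("beta-sheet nanocrystals", "confer", "stiffness"),
    ("spider silk", "inspires", "synthetic fibers"),
    ("synthetic fibers", "require", "stiffness"),
]

def build_graph(triples):
    graph = {}
    for s, rel, o in triples:
        graph.setdefault(s, []).append((rel, o))
    return graph

def explain(graph, start, goal):
    """BFS over the graph; returns one alternating entity/relation path
    linking start to goal, i.e., a human-readable chain of reasoning."""
    queue = deque([(start, [start])])
    seen = {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for rel, nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [rel, nxt]))
    return None

g = build_graph(triples)
path = explain(g, "spider silk", "stiffness")
```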
As advanced models become more robust, their resource demands become increasingly challenging. Training a large model from scratch is extremely computationally expensive, so finetuning techniques are invaluable. At one end of the spectrum are large models such as GPT-4; at the other are smaller open-source suites such as Llama 2[107] and Mistral 7B.[108] There are also compact models such as the Phi series[109] and the reasoning-optimized, Llama-based Orca-2,[110] which distill abilities from larger models. Mixtral, in turn, is a mixture-of-experts model that routes each input among multiple expert subnetworks.[111] Scaling smaller models into assemblies of larger systems with increased performance is an active and rapidly developing area of research.[112]
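The routing idea behind mixture-of-experts models can be sketched in a few lines: a gating function scores each expert for a given input and only the top-scoring expert runs, so total capacity grows with the number of experts while per-input compute stays roughly constant. The two toy "experts" and the gate weights below are illustrative stand-ins for learned subnetworks.

```python
# Toy mixture-of-experts routing. Real systems use learned gating
# networks and transformer expert blocks; here both are tiny stand-ins.

def gate(x, gate_weights):
    """Score each expert by a dot product with the input features."""
    scores = [sum(w * v for w, v in zip(ws, x)) for ws in gate_weights]
    return scores.index(max(scores))

experts = [
    lambda x: sum(x),   # 'expert 0': an additive feature
    lambda x: max(x),   # 'expert 1': a max-pooling feature
]
gate_weights = [[1.0, 0.0], [0.0, 1.0]]  # illustrative gate parameters

def moe(x):
    """Dispatch the input to the single best-scoring expert."""
    chosen = gate(x, gate_weights)
    return chosen, experts[chosen](x)

chosen, out = moe([2.0, 0.5])
```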
Finetuning can be complemented by compressing models via quantization techniques.[113],[114] Quantization makes models smaller by reducing the numerical precision of parameters whose full fidelity is unnecessary, and it is a powerful strategy for making LLMs accessible to a wider range of researchers for both training and inference. In the future, the quantization process may even be automated, with models developing a form of meta-cognition for self-improvement so that architectures can self-evolve toward better performance.
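As a minimal illustration of the underlying trade-off, the sketch below performs symmetric 8-bit quantization on a small list of weights (the weight values are arbitrary examples): each parameter is stored as an integer in [-127, 127] plus one shared scale factor, roughly a 4x smaller footprint than 32-bit floats, with reconstruction error bounded by half the quantization step.

```python
# Symmetric 8-bit quantization sketch: floats -> int8 codes + one scale.

def quantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1                     # 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax    # one shared scale
    q = [round(w / scale) for w in weights]        # integer codes
    return q, scale

def dequantize(q, scale):
    return [v * scale for v in q]

weights = [0.12, -0.53, 0.91, -0.07]               # example values
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))
```

Production schemes such as GPTQ or the 4-bit formats used in QLoRA are considerably more sophisticated (per-block scales, error compensation), but the storage-versus-fidelity trade is the same.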
The active work presented here only scratches the surface of the possibilities enabled by generative AI tools for bio-inspired materials and related areas. This moment represents a turning point from single-shot predictions to systems composed of multiple interacting thoughts, culminating in nonlinear, agentically generated solutions that may alter the way we approach scientific progress and discovery (Figure 7a). Various developments, especially higher-efficiency pruned models, multi-agent models, and small but high-reasoning models, have paved the way for bio-inspired generative LLMs and many variants useful for science.
Executing collaborative research involves studying both human-in-the-loop and human-out-of-the-loop systems, aiming to produce customizable bio-inspired materials. Specifically, designing hierarchically constructed AI systems, akin to biological materials, means assembling small models into a complex and robust whole (Figure 7b). Starting with agents finetuned on various simulation methods will allow information to be retrieved at different length scales and via distinct strategies, constructing complex models from small model building blocks. However, since the quality and variety of finetuning resources play critical roles, it is important to understand how alignment and productivity scale with the number of agents and the degree of human involvement. Simultaneously, the exploration of biological and bio-inspired materials will advance natural scientific knowledge outside of human-centric practices and strengthen the connection to engineering solutions. Multi-agent systems can elevate the level of creative thinking in scientific applications and, if successful, can shift the paradigms of scientific discovery. The learning and adaptation inherent in multi-agent systems will prove advantageous for tackling increasingly challenging and complex engineering problems such as bio-inspired materials design.
Data mining can extract more from legacy literature: not only text but also figures, symbolic expressions, derived equations, and tabulated data. Using emerging optical character recognition tools and data extraction methods, models can be trained on high-quality data drawn from these detailed digitized documents.
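As one small example of such extraction, a regular expression can pull "property: value unit" rows out of raw OCR output into structured records. The OCR snippet and the pattern below are illustrative and far simpler than a production extraction pipeline.

```python
# Sketch of mining tabulated data from digitized text with a regex.

import re

# Illustrative OCR output mixing a table caption, property rows, and noise.
ocr_text = """
Table 2. Mechanical properties.
tensile strength: 1.1 GPa
Youngs modulus: 10 GPa
figure caption: spider silk fiber
strain at break: 0.27 -
"""

# Matches lines of the form "name: value unit".
row = re.compile(r"^(?P<name>[\w ]+):\s*(?P<value>[\d.]+)\s*(?P<unit>\S+)$")

records = []
for line in ocr_text.strip().splitlines():
    m = row.match(line.strip())
    if m:
        records.append((m["name"], float(m["value"]), m["unit"]))
```

Note that the non-numeric "figure caption" line is correctly rejected; real pipelines would add layout analysis, unit normalization, and provenance tracking.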
In the realm of multi-agent and multi-modal systems, physical autonomous experimentation platforms for sustainable programmable materials discovery are important areas of study[115] (Figure 7c). To physically fabricate bio-inspired designs, a 3D printer that prints with degradable, biotic material[116] can be used. Future iterations of this printer may feature automation for improved print quality and control, as well as multi-material capabilities that better emulate biological composite materials. To bridge computational predictions with real-world experiments, the goal is to integrate this physical fabrication system into a multi-agent system in which computational agents suggest optimal material designs while a physical robotics system oversees fabrication with the printer and conducts materials characterization; data are fed back, and next steps are planned by other agents.
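The intended feedback loop can be sketched in the abstract. In the toy below, a simulated `measure` function stands in for the printer plus mechanical characterization (its peak at 60% infill is a made-up objective, not a real material property), and measured scores feed back into the next proposed design.

```python
# Sketch of a closed design-fabricate-test loop with simulated hardware.

def measure(infill):
    """Toy testbed: 'toughness' peaks at 60% infill (hypothetical)."""
    return -(infill - 0.60) ** 2

def closed_loop(start, step=0.05, iterations=20):
    """Greedy local search: propose neighboring designs, 'fabricate and
    test' them, and keep whichever measured best; feedback drives the
    next proposal, as other agents would in a multi-agent platform."""
    design, best = start, measure(start)
    for _ in range(iterations):
        for candidate in (design - step, design + step):
            score = measure(candidate)
            if score > best:
                design, best = candidate, score
    return design

best_design = closed_loop(start=0.30)
```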
The type and quality of data used to train generative AI models are crucial. Training corpora should be tailored toward trusted, relevant, and cohesive sources. As challenging as it is to obtain access to full-text publications, datasets should ideally consist of the entire texts of peer-reviewed works so that each document can be properly understood. It is also crucial to capture diverse datasets that represent different ideas, perspectives, and sources. Open-source development allows transparent sourcing of data and knowledge bases and can be adapted by users to meet particular needs. Scientific publishers could explore new partnerships to support generative AI development by introducing the infrastructure needed to deliver text- and data-mining materials, lowering the barriers for researchers to extract high-quality data. This mission can be supported by publishers providing universal API formats (i.e., one generalized API format usable across publishers) and preparing cleaned datasets, including chemical compound names, equations, patents, records, and figures, in anticipation of big data tasks.
A framework that accommodates data-mining-based research so that past studies can be credited appropriately is an important consideration, for example when crediting work used to train a generative AI model. When retrieval-augmented generation (RAG) is used in conjunction with BioinspiredLLM or MechGPT, for instance, credit can be given to specific sources, allowing users not only to answer complex questions accurately but also to understand key source citations. Such research can be further supported by efforts to build findable, accessible, interoperable, and reusable (FAIR) data and information in scientific publishing.[117] Current publication frameworks, however, do not typically accommodate such efforts, even though all stakeholders, including publishers, academic institutions, and research communities, can benefit from thoughtful discussion about this evolving landscape of needs and opportunities through which scientific and technological progress can be enhanced. Other important considerations include who should own and profit from information generated by AI systems, and how fair compensation or credit can be ensured for those producing the training data.
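A minimal sketch of retrieval with source credit, assuming nothing beyond the standard library: documents are ranked by bag-of-words cosine similarity, and the top passage is returned together with its citation. The two corpus entries and their attributions are invented for illustration, not real sources.

```python
# Toy retrieval-augmented lookup that always surfaces its source,
# so credit flows back to the work that supplied the answer.

import math
from collections import Counter

# Illustrative (source, passage) corpus; real systems index full texts.
corpus = [
    ("Smith 2021", "nacre toughness arises from brick and mortar architecture"),
    ("Lee 2022",   "spider silk strength stems from beta sheet nanocrystals"),
]

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

def retrieve(query):
    """Return the best-matching passage together with its citation."""
    q = Counter(query.lower().split())
    scored = [(cosine(q, Counter(text.lower().split())), source, text)
              for source, text in corpus]
    score, source, text = max(scored)
    return {"answer": text, "cited_source": source, "score": score}

hit = retrieve("why is spider silk strong")
```

In a full RAG pipeline the retrieved passage would condition an LLM's generation, with `cited_source` carried through to the final answer.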
While generative AI holds tremendous potential to contribute to scientific progress, it has also introduced new public fears pertaining to the malicious use of large AI systems, the unintended harmful effects of AI misalignment, and other ethical concerns. Today we find ourselves at an inflection point in history. In one scenario, generative AI is pursued for the monetary gain of a few large corporations, and the technology remains poorly understood by everyone except the corporations training the largest models behind closed doors. In another scenario, generative AI is used as a tool by diverse groups of creators to forge a path to a radically different future, especially when pursued in an open-source modality. Recent trends toward powerful open-source models and platforms that drive such progress, such as Hugging Face, are important examples that help scientists foster a transparent strategy forward, enabling humanity to contribute to the creation, use, and improvement of generative AI methods. Beyond ensuring access, broad use and deployment of models by many groups may yield far greater potential for bootstrapping generative models toward greater innovation, connecting previously disconnected areas of thought, experience, and frameworks toward even greater democratization of access and impact for all of humanity.
Focusing on the scenario in which AI has the capacity to generate creatively, it must be considered that human creativity, although far more computationally complex and enigmatic than AI algorithms, is nevertheless an emergent physical phenomenon based on synthesizing past data to generate new actions.[118] While AI and human cognition likely function differently for now, there is little reason to believe human creativity is the only kind. Human–AI collaborations and multi-agent systems already show intriguing results that exceed what human intelligence has generated alone. AI, with its radically different modes of cognition, could push against the limitations of established thought and provide insight into an as-yet unimagined future. Future artificial general intelligence could come to be seen as another milestone in human evolution, helping us overcome barriers thought impenetrable today.
What is striking, and perhaps most impactful, is that generative AI, particularly in the context of bio-inspiration, can overcome the human biases inherently present in traditional design processes. Drawing from the nonhuman world, bio-inspired generative AI explores a diverse design space shaped by hierarchical biological morphologies at varying length scales. This departure from human-centric models simultaneously broadens design possibilities and aligns with sustainability principles, using natural structures as blueprints for designs that integrate seamlessly with ecosystems. Motivated by the pursuit of knowledge, academia is well suited to open-source AI research and uniquely positioned to pioneer innovative educational practices, potentially transforming learning methods for students, hypothesis generation for scientists, theorem discovery for mathematicians, and more.
Generative AI tools open the door to rapidly reshaping the way we discover, develop, and design materials. Amidst the pressing need to combat climate change and address sustainability concerns, innovative methods are required to transcend human-centric processes and habits, fostering harmony with, and education from, nature rather than conflict. By learning from nature and leveraging generative AI to study biological insights, the human design space can be swiftly expanded. This exploration will unveil novel design principles and processes, bringing about much-needed change in our materials ecosystem.
This material was supported by the MIT Generative AI initiative. This work was supported by the Army Research Office (W911NF1920098 and W911NF2220213), ONR (N00014-19-1-2375 and N00014-20-1-2189), and USDA (2021-69012-35978). Further support was provided by the National Science Foundation Graduate Research Fellowship under Grant No. 2141064 as well as the MIT-IBM Watson AI Lab.
1. Shen, S. C. et al. Computational Design and Manufacturing of Sustainable Materials through First-Principles and Materiomics. Chem Rev 123, 2242–2275 (2023).
2. Ritchie, H., and Roser, M. Sector by sector: where do global greenhouse gas emissions come from? Our World in Data (2023).
3. Branker, K., Jeswiet, J. and Kim, I. Y. Greenhouse gases emitted in manufacturing a product—A new economic model. CIRP Annals 60, 53–56 (2011).
4. Ivar Do Sul, J. A., and Costa, M. F. The present and future of microplastic pollution in the marine environment. Environmental Pollution 185, 352–364 (2014).
5. Eriksen, M. et al. Plastic Pollution in the World’s Oceans: More than 5 Trillion Plastic Pieces Weighing over 250,000 Tons Afloat at Sea. PLoS One 9, e111913 (2014).
6. Katiyar, N. K., Goel, G., Hawi, S., and Goel, S. Nature-inspired materials: Emerging trends and prospects. NPG Asia Materials 13, 1–16 (2021).
7. Canter, N. Humpback whales inspire new wind turbine technology. Tribology and Lubrication Technology 64, 10–11 (2008).
8. Menon, C., Murphy, M., and Sitti, M. Gecko inspired surface climbing robots. Proceedings - 2004 IEEE International Conference on Robotics and Biomimetics, IEEE ROBIO 2004 431–436 (2004) doi:10.1109/ROBIO.2004.1521817.
9. Autumn, K., and Gravish, N. Gecko adhesion: evolutionary nanotechnology. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 366, 1575–1590 (2008).
10. Qin, Z., and Buehler, M. J. Molecular mechanics of mussel adhesion proteins. J Mech Phys Solids 62, (2014).
11. Gao, H., Wang, X., Yao, H., Gorb, S., and Arzt, E. Mechanics of hierarchical adhesion structures of geckos. Mechanics of Materials 37, 275–285 (2005).
12. Othmani, N. I. et al. Reviewing biomimicry design case studies as a solution to sustainable design. Environmental Science and Pollution Research 29, 69327–69340 (2022).
13. Zur, K., Barretta, R., Agarwal, R., Ruta, G., and Chayaamor-Heil, N. From Bioinspiration to Biomimicry in Architecture: Opportunities and Challenges. Encyclopedia 3, 202–223 (2023).
14. Cranford, S. W., and Buehler, M. J. Biomateriomics. (Springer Netherlands, 2012).
15. Ingrole, A., Aguirre, T. G., Fuller, L., and Donahue, S. W. Bioinspired energy absorbing material designs using additive manufacturing. J Mech Behav Biomed Mater 119, 104518 (2021).
16. Meyers, M. A., McKittrick, J., and Chen, P. Y. Structural biological materials: Critical mechanics-materials connections. Science 339, 773–779 (2013).
17. Meyers, M. A., Chen, P. Y., Lin, A. Y. M., and Seki, Y. Biological materials: Structure and mechanical properties. Prog Mater Sci 53, 1–206 (2008).
18. Lazarus, B. S. et al. Equine hoof wall: Structure, properties, and bioinspired designs. Acta Biomater 151, 426–445 (2022).
19. Wang, B., Yang, W., McKittrick, J., and Meyers, M. A. Keratin: Structure, mechanical properties, occurrence in biological organisms, and efforts at bioinspiration. Prog Mater Sci 76, 229–318 (2016).
20. Nepal, D. et al. Hierarchically structured bioinspired nanocomposites. Nature Materials 22, 18–35 (2022).
21. Hayashi, C. Y., Shipley, N. H., and Lewis, R. V. Hypotheses that correlate the sequence, structure, and mechanical properties of spider silk proteins. Int J Biol Macromol 24, 271–275 (1999).
22. Lu, W., Lee, N. A., and Buehler, M. J. Modeling and design of heterogeneous hierarchical bioinspired spider web structures using deep learning and additive manufacturing. Proc Natl Acad Sci U S A 120, e2305273120 (2023).
23. Chen, P.-Y. et al. Structure and mechanical properties of selected biological materials. J Mech Behav Biomed Mater 1, 208–226 (2008).
24. Chen, P.-Y., Lin, A. Y.-M., McKittrick, J., and Meyers, M. A. Structure and mechanical properties of crab exoskeletons. Acta Biomater 4, 587–596 (2008).
25. Naleway, S. E., Taylor, J. R. A., Porter, M. M., Meyers, M. A., and McKittrick, J. Structure and mechanical properties of selected protective systems in marine organisms. Materials Science and Engineering: C 59, 1143–1167 (2016).
26. Meyers, M. A., Chen, P.-Y., Lin, A. Y.-M., and Seki, Y. Biological materials: Structure and mechanical properties. Prog Mater Sci 53, 1–206 (2008).
27. Lin, A. Y. M., Meyers, M. A., and Vecchio, K. S. Mechanical properties and structure of Strombus gigas, Tridacna gigas, and Haliotis rufescens sea shells: A comparative study. Materials Science and Engineering C 26, 1380–1389 (2006).
28. Karam, G. N., and Gibson, L. J. Biomimicking of animal quills and plant stems: natural cylindrical shells with foam cores. Materials Science and Engineering: C 2, 113–132 (1994).
29. Gibson, L. J. The hierarchical structure and mechanics of plant materials. J R Soc Interface 9, 2749–2766 (2012).
30. Espinosa, H. D., Rim, J. E., Barthelat, F., and Buehler, M. J. Merger of structure and material in nacre and bone - Perspectives on de novo biomimetic materials. Prog Mater Sci 54, 1059–1100 (2009).
31. Buehler, M. J. Computational and Theoretical Materiomics: Properties of Biological and de novo Bioinspired Materials. J Comput Theor Nanosci 7, 1203–1209 (2010).
32. Knowles, T. P. J., and Buehler, M. J. Nanomechanics of functional and pathological amyloid materials. Nature Nanotechnology 6, 469–479 (2011).
33. Müller, R. et al. Biodiversifying bioinspiration. Bioinspir Biomim 13, 053001 (2018).
34. Luu, R. K., and Buehler, M. J. Materials Informatics Tools in the Context of Bio-Inspired Material Mechanics. J Appl Mech 90 (2023).
35. Ackbarow, T., and Buehler, M. J. Hierarchical coexistence of universality and diversity controls robustness and multi-functionality in protein materials. J Comput Theor Nanosci 5 (2008).
36. Ottino, J. The Nexus: Augmented Thinking for a Complex World--The New Convergence of Art, Technology, and Science (The MIT Press, 2022).
37. Kingma, D. P., and Welling, M. Auto-Encoding Variational Bayes. 2nd International Conference on Learning Representations, ICLR 2014 - Conference Track Proceedings (2013).
38. Kingma, D. P., and Welling, M. An Introduction to Variational Autoencoders. Foundations and Trends in Machine Learning 12, 307–392 (2019).
39. Goodfellow, I. et al. Generative Adversarial Networks. Commun ACM 63, 139–144 (2014).
40. Lew, A. J., and Buehler, M. J. Encoding and exploring latent design space of optimal material structures via a VAE-LSTM model. Forces in Mechanics 5, 100054 (2021).
41. Yu, C. H. et al. Hierarchical Multiresolution Design of Bioinspired Structural Composites Using Progressive Reinforcement Learning. Adv Theory Simul 5, 2200459 (2022).
42. Yang, Z., Yu, C. H., and Buehler, M. J. Deep learning model to predict complex stress and strain fields in hierarchical composites. Sci Adv 7, eabd7416 (2021).
43. Yang, Z., Hsu, Y.-C., and Buehler, M. J. Generative multiscale analysis of de novo proteome-inspired molecular structures and nanomechanical optimization using a VoxelPerceiver transformer model. J Mech Phys Solids 170, 105098 (2023).
44. Shen, S. C., and Buehler, M. J. Nature-inspired architected materials using unsupervised deep learning. Communications Engineering 1, 1–15 (2022).
45. Vaswani, A. et al. Attention Is All You Need. In Advances in Neural Information Processing Systems (eds. Guyon, I. et al.) vol. 30 (Curran Associates, Inc., 2017).
46. Hu, Y., and Buehler, M. J. Deep language models for interpretative and predictive materials science. APL Mach. Learn 1, 10901 (2023).
47. Buehler, M. J. FieldPerceiver: Domain agnostic transformer model to predict multiscale physical fields and nonlinear material properties through neural ologs. Materials Today 57, 9–25 (2022).
48. Buehler, E. L., and Buehler, M. J. End-to-end prediction of multimaterial stress fields and fracture patterns using cycle-consistent adversarial and transformer neural networks. Biomedical Engineering Advances 4, 100038 (2022).
49. Buehler, M. J. Multiscale Modeling at the Interface of Molecular Mechanics and Natural Language through Attention Neural Networks. Acc Chem Res 55, 3387–3403 (2022).
50. Buehler, M. J. MeLM, a generative pretrained language modeling framework that solves forward and inverse mechanics problems. J Mech Phys Solids 181, 105454 (2023).
51. Esser, P., Rombach, R., and Ommer, B. Taming Transformers for High-Resolution Image Synthesis. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 12868–12878 (2020) doi:10.1109/CVPR46437.2021.01268.
52. Buehler, M. J. Modeling atomistic dynamic fracture mechanisms using a progressive transformer diffusion model. J. Appl. Mech. 89, 121009 (2022).
53. Yang, Z., and Buehler, M. J. Words to Matter: De novo Architected Materials Design Using Transformer Neural Networks. Front Mater 8, 740754 (2021).
54. Hsu, Y. C., Yang, Z., and Buehler, M. J. Generative design, manufacturing, and molecular modeling of 3D architected materials based on natural language input. APL Mater 10, 41107 (2022).
55. Buehler, M. J. DeepFlames: Neural network-driven self-assembly of flame particles into hierarchical structures. MRS Commun 12, 257–265 (2022).
56. Buehler, M. J. Diatom-inspired architected materials using language-based deep learning: Perception, transformation and manufacturing (2023).
57. Jumper, J. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021).
58. Baek, M. et al. Accurate prediction of protein structures and interactions using a three-track neural network. Science 373, 871–876 (2021).
59. Varadi, M. et al. AlphaFold Protein Structure Database: massively expanding the structural coverage of protein-sequence space with high-accuracy models. Nucleic Acids Res 50, D439–D444 (2022).
60. Tourlet, S., Radjasandirane, R., Diharce, J., and de Brevern, A. G. AlphaFold2 Update and Perspectives. BioMedInformatics 2023, Vol. 3, Pages 378-390 3, 378–390 (2023).
61. Høie, M. H. et al. NetSurfP-3.0: accurate and fast prediction of protein structural features by protein language models and deep learning. Nucleic Acids Res 50, W510–W515 (2022).
62. Zhang, B., Li, J., and Lü, Q. Prediction of 8-state protein secondary structures by a novel deep learning architecture. BMC Bioinformatics 19, 1–13 (2018).
63. Yu, C. H. et al. End-to-End Deep Learning Model to Predict and Design Secondary Structure Content of Structural Proteins. ACS Biomater Sci Eng 8, 1156–1165 (2022).
64. Elnaggar, A. et al. ProtTrans: Toward Understanding the Language of Life Through Self-Supervised Learning. IEEE Trans Pattern Anal Mach Intell 44, 7112–7127 (2022).
65. Tubiana, J., Schneidman-Duhovny, D., and Wolfson, H. J. ScanNet: an interpretable geometric deep learning model for structure-based protein binding site prediction. Nature Methods 19, 730–739 (2022).
66. Sverrisson, F., Feydy, J., Correia, B. E., and Bronstein, M. M. Fast end-to-end learning on protein surfaces. bioRxiv 2020.12.28.424589 (2020) doi:10.1101/2020.12.28.424589.
67. Thumuluri, V. et al. NetSolP: predicting protein solubility in Escherichia coli using language models. Bioinformatics 38, 941–946 (2022).
68. Buehler, M. J. Generative pretrained autoregressive transformer graph neural network applied to the analysis and discovery of novel proteins. J Appl Phys 134, 84902 (2023).
69. Khare, E., Gonzalez-Obeso, C., Kaplan, D. L., and Buehler, M. J. CollagenTransformer: End-to-End Transformer Model to Predict Thermal Stability of Collagen Triple Helices Using an NLP Approach. ACS Biomater Sci Eng 8, 4301–4310 (2022).
70. Hu, Y., and Buehler, M. J. End-to-End Protein Normal Mode Frequency Predictions Using Language and Graph Models and Application to Sonification. ACS Nano 16, 20656–20670 (2022).
71. Guo, K., and Buehler, M. J. Rapid prediction of protein natural frequencies using graph neural networks. Digital Discovery 1, 277–285 (2022).
72. Liu, F. Y. C., Ni, B., and Buehler, M. J. PRESTO: Rapid protein mechanical strength prediction with an end-to-end deep learning model. Extreme Mech Lett 55, 101803 (2022).
73. Lew, A. J., and Buehler, M. J. A deep learning augmented genetic algorithm approach to polycrystalline 2D material fracture discovery and design. Appl Phys Rev 8, 041414 (2021).
74. Khare, E. et al. Discovering design principles of collagen molecular stability using a genetic algorithm, deep learning, and experimental validation. Proc Natl Acad Sci U S A 119, e2209524119 (2022).
75. Korendovych, I. V., and DeGrado, W. F. De novo protein design, a retrospective. Q Rev Biophys 53, e3 (2020).
76. Huang, P. S., Boyken, S. E., and Baker, D. The coming of age of de novo protein design. Nature 537, 320–327 (2016).
77. Ho, J., Jain, A., and Abbeel, P. Denoising Diffusion Probabilistic Models. Adv Neural Inf Process Syst 33, 6840–6851 (2020).
78. Marcus, G., Davis, E., and Aaronson, S. A very preliminary analysis of DALL-E 2 (2022). doi:10.48550/arxiv.2204.13807.
79. Saharia, C. et al. Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding (2022). doi:10.48550/arxiv.2205.11487.
80. Rombach, R., Blattmann, A., Lorenz, D., Esser, P., and Ommer, B. High-Resolution Image Synthesis with Latent Diffusion Models. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition 2022-June, 10674–10685 (2021).
81. Makoś, M. Z., Verma, N., Larson, E. C., Freindorf, M., and Kraka, E. Generative adversarial networks for transition state geometry prediction. J Chem Phys 155, 024116 (2021).
82. Lebese, T., Mellado, B., and Ruan, X. The use of Generative Adversarial Networks to characterize new physics in multi-lepton final states at the LHC. International Journal of Modern Physics A (2021). doi:10.48550/arxiv.2105.14933.
83. Buehler, M. J. FieldPerceiver: Domain agnostic transformer model to predict multiscale physical fields and nonlinear material properties through neural ologs. Materials Today 57, 9–25 (2022).
84. Yang, Z., Yu, C. H., Guo, K., and Buehler, M. J. End-to-end deep learning method to predict complete strain and stress tensors for complex hierarchical composite microstructures. J Mech Phys Solids 154, 104506 (2021).
85. Ni, B., and Gao, H. A deep learning approach to the inverse problem of modulus identification in elasticity. MRS Bull 46, 19–25 (2021).
86. Ni, B., Kaplan, D. L., and Buehler, M. J. Generative design of de novo proteins based on secondary-structure constraints using an attention-based diffusion model. Chem 9, 1828–1849 (2023).
87. Ni, B., Kaplan, D. L., and Buehler, M. J. ForceGen: End-to-end de novo protein generation based on nonlinear mechanical unfolding responses using a protein language diffusion model (2023).
88. Su, I., and Buehler, M. J. Mesomechanics of a three-dimensional spider web. J Mech Phys Solids 144, 104096 (2020).
89. Su, I. et al. Imaging and analysis of a three-dimensional spider web architecture. J R Soc Interface 15, (2018).
90. Arakawa, K. et al. 1000 spider silkomes: Linking sequences to silk physical properties. Sci Adv 8, 6043 (2022).
91. Lu, W., Yang, Z., and Buehler, M. J. Rapid mechanical property prediction and de novo design of three-dimensional spider webs through graph and GraphPerceiver neural networks. J Appl Phys 132, 74703 (2022).
92. Lew, A. J., Jin, K., and Buehler, M. J. Designing architected materials for mechanical compression via simulation, deep learning, and experimentation. npj Computational Materials 9, 1–9 (2023).
93. Lew, A. J., and Buehler, M. J. Single-shot forward and inverse hierarchical architected materials design for nonlinear mechanical properties using an Attention-Diffusion model. Materials Today 64, 10–20 (2023).
94. Luu, R. K., Wysokowski, M., and Buehler, M. J. Generative discovery of de novo chemical designs using diffusion modeling and transformer deep neural networks with application to deep eutectic solvents. Appl Phys Lett 122, 234103 (2023).
95. Wysokowski, M. et al. Untapped Potential of Deep Eutectic Solvents for the Synthesis of Bioinspired Inorganic-Organic Materials. Chemistry of Materials (2023). doi:10.1021/acs.chemmater.3c00847.
96. Buehler, M. J. A computational building block approach towards multiscale architected materials analysis and design with application to hierarchical metal metamaterials. Model Simul Mat Sci Eng 31, 054001 (2023).
97. Bubeck, S. et al. Sparks of Artificial General Intelligence: Early experiments with GPT-4 (2023).
98. Gunasekar, S. et al. Textbooks Are All You Need (2023).
99. Phi-2: The surprising power of small language models - Microsoft Research. https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/.
100. Luu, R. K., and Buehler, M. J. BioinspiredLLM: Conversational Large Language Model for the Mechanics of Biological and Bio-Inspired Materials. Advanced Science 2306724 (2023) doi:10.1002/ADVS.202306724.
101. Buehler, M. J. MechGPT, a Language-Based Strategy for Mechanics and Materials Modeling That Connects Knowledge Across Scales, Disciplines and Modalities. Appl Mech Rev 1–82 (2023). doi:10.1115/1.4063843.
102. Buehler, M. J. Unsupervised cross-domain translation via deep learning and adversarial attention neural networks and application to music-inspired protein designs. Patterns 0, 100692 (2023).
103. Ni, B., and Buehler, M. J. MechAgents: Large language model multi-agent collaborations can solve mechanics problems, generate new data, and integrate knowledge (2023).
104. Buehler, M. J. Generative retrieval-augmented ontologic graph and multi-agent strategies for interpretive large language model-based materials design (2023).
105. Lipton, Z. C. The mythos of model interpretability: In machine learning, the concept of interpretability is both important and slippery. Queue 16, 31–57 (2018).
106. Touvron, H. et al. Llama 2: Open Foundation and Fine-Tuned Chat Models (2023).
107. Jiang, A. Q. et al. Mistral 7B (2023).
108. Li, Y. et al. Textbooks Are All You Need II: phi-1.5 technical report (2023).
109. Mitra, A. et al. Orca 2: Teaching Small Language Models How to Reason (2023).
110. Jiang, A. Q. et al. Mixtral of Experts. https://arxiv.org/abs/2401.04088 (2024).
111. Kim, D. et al. SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling (2023).
112. Dettmers, T., Pagnoni, A., Holtzman, A., and Zettlemoyer, L. QLoRA: Efficient Finetuning of Quantized LLMs (2023).
113. Frantar, E., Ashkboos, S., Zurich, E., Hoefler, T., and Alistarh, D. GPTQ: Accurate Post-Training Quantization for Generative Pre-trained Transformers (2022).
114. Lee, N. A., Shen, S. C., and Buehler, M. J. An automated biomateriomics platform for sustainable programmable materials discovery. Matter 5, 3597–3613 (2022).
115. Shen, S. C. et al. Robust Myco-Composites as a Platform for Versatile Hybrid-Living Structural Materials (2023).
116. Brinson, L. C. et al. Community action on FAIR data will fuel a revolution in materials research. MRS Bull 1–5 (2023). doi:10.1557/s43577-023-00498-4.
117. What DALL-E reveals about human creativity. Wu Tsai Neurosciences Institute. https://neuroscience.stanford.edu/news/what-dall-e-reveals-about-human-creativity.