The intricate historic interweaving of textile manufacturing and digital technologies opens transformational opportunities at the dawn of the new era of artificial intelligence, which is reshaping industries and the fabric of human societies. While the fashion and textile industries already leverage AI-powered tools for real-to-virtual transformation of products and processes, we make the case that AI can and should play a key role in enhancing virtual-to-real product transformation via generative design of textiles for manufacture. Although generative AI will inevitably impact workforce development, we are optimistic that it will also provide designers, artisans, and hobbyists with new tools to preserve and elevate their craft.
Address correspondence to: Svetlana V. Boriskina, Massachusetts Institute of Technology, Cambridge, MA, 02139 (E-mail: [email protected])
Keywords: textiles, lacemaking, graph theory, design, manufacturing, CAD, computer vision, cultural heritage
The recent public release and viral adoption of large language models by industry, governments, and the public are fueling the artificial intelligence (AI) revolution and marking a major turning point in the history of humankind. Previous turning points marked by the industrial and digital revolutions of the eighteenth and twentieth centuries led to a dramatic transformation of manufacturing industries and the social fabric of contemporary societies. Textile industry innovation is widely credited for fueling the industrial revolution, leading to the creation of new machinery and mass production of woven consumer goods. A lesser-known fact is that the digital revolution can also be traced to innovations in textile weaving, which used both binary and nonbinary code for information storage and exchange long before computers were invented.
The knotted-string khipu structures were used by the Incas and other peoples of the central Andes for recording and exchanging not only statistical data but also narrative texts, as their nonbinary notation is believed to be complex enough to encode linguistic information (Figure 1a).1 When Charles Babbage released his plans for the Analytical Engine in 1837, Ada Lovelace commented that the engine "weaves algebraic patterns, just as the Jacquard loom weaves flowers and leaves."2 The punch card of the Jacquard loom (just like the IBM punch cards a century later) worked on a binary-like system, in which a punched hole (1) or the absence of one (0) provided instructions on which warp threads should be raised to allow the weft thread to pass under them to create a complex pattern (Figure 1b). Finally, the first nonvolatile computer memory comprised tiny donuts of ferrite material strung on wires and handwoven by textile workers into fabric-like patterns (Figure 1c).3 The NASA Guidance Computer used on the first Apollo missions relied on such memory elements physically woven into a high-density storage called 'core rope memory.'
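The punch-card logic described above can be illustrated with a few lines of code. This is purely an illustrative toy: the `raised_warps` helper and the example card row are hypothetical, not taken from any historical card.

```python
def raised_warps(card_row):
    """Read one Jacquard-style punch-card row: a punched hole (1) raises the
    corresponding warp thread so the weft can pass under it; a blank (0)
    leaves the warp thread down."""
    return [i for i, hole in enumerate(card_row) if hole == 1]

raised_warps([1, 0, 1, 1, 0])
# → [0, 2, 3] (warp threads 0, 2, and 3 are raised for this pick)
```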
It is perhaps symbolic that AI language models such as ChatGPT, which are run by modern computers, tend to overuse textile-related words such as 'weaving' and 'tapestry.' AI's own explanation for its obsession with the word 'tapestry' (a form of weft-facing woven textile) relies on the argument that the word implies something complex, detailed, and carefully constructed, combining rich imagery, symbolism, versatility, sensory appeal, and literary tradition.4 The complexity and symbolism of a well-crafted text share a deep connection with the corresponding properties of an intricately crafted textile; both words even originate from the same Latin root 'texere,' to weave.5 The combination of the language of words and textile patterns can become a powerful communication tool, most famously practiced by the late Justice Ruth Bader Ginsburg through her sharp use of language punctuated by the ornate lace collars she wore on the bench to communicate her opinions or dissents.6 Just like languages, traditional textile crafts offer a tremendous opportunity for AI-enabled generation of new knowledge, thus closing the loop between knowledge generation and storage and textile engineering.
Fast forwarding to today, studies show that many fashion brands are using AI to innovate their businesses, and 73% of fashion executives expect generative AI to be an important priority in 2024.7 However, the role of AI in today's fashion and textile industries is largely limited to digital design and production quality control. The biggest impacts of the AI revolution in textiles have been in the digital domain, including the management of supply chains to reduce overstock and waste and the enhancement of the shopping experience with natural language-powered shopping assistants. In turn, virtual marketing campaigns and virtual try-on experiences leverage AI-powered 'real-to-virtual' transformation of products and customers into their digital twins and virtual avatars, respectively, while textile museums are digitizing their collections of fabric patterns for heritage data preservation and analysis. We believe that AI can and should play a key role in enhancing 'virtual-to-real' product transformation via the generative design of textiles for manufacture. AI capabilities in this field remain largely untapped and need to be developed to engineer enhanced mechanical properties, transform textile manufacturing processes, and enable smart textile applications.
Here, we propose and discuss a generative AI-enabled pipeline to design textiles for manufacture, which integrates historical pattern collection studies, mathematical modeling, mechanical characterization, computer vision deep learning, and lacemaking knowledge. While our pipeline is applicable to any textile type, we focus on bobbin lace as an intricate, challenging example of an endangered handicraft important to textile heritage (Figure 2a). Lacemaking draws from elements of weaving, embroidering/sewing, and knitting, all current methods of mass textile manufacture, but adds the challenge of holes and negative space in the fabric, forming an intricate pattern. Relative to more conventional woven or knitted textiles, the open net structure of lace textiles provides additional degrees of freedom in tensile property engineering, which can be leveraged for modern applications in wearable, medical, industrial, and geotextiles. AI-generated lace patterns can be optimized for aesthetic appeal, cultural relevance, elasticity, tensile strength, Poisson ratio, and other mechanical characteristics, as well as for the integration of conductive threads and electronic components.
Historically, lace has been an indicator of wealth, class, and decorative flair, but it has fallen out of popularity in modern fashion. Lacemaking was once a valuable source of income for poor women, skilled artisans who advanced the craft but often remained uncredited for their artistry and engineering skills. Cheap mass production enabled by chemical lace and bobbinet machines led to a decline in handmade lacemaking and devalued the intricate craftsmanship and cultural significance of traditional techniques. Nowadays, designers, artists, and writers fear similar displacement by generative AI technologies.
Here, we make the case that generative AI can bridge the gap between historical craftsmanship and contemporary technology to create a sustainable model for the preservation and evolution of textile heritage crafts. If leveraged properly, AI can balance the valuable contributions of historical and current lacemaking practitioners to enable the revival of the lacemaking craft not merely as a historical relic but as a living, evolving art form that adapts to the needs and aesthetics of the present.
Traditionally, lace has been a culturally significant commodity. It was used by European nobility in the seventeenth and eighteenth centuries to display social status, wealth, and fashion trends. Lacemaking was typically practiced by women and children, providing a means to contribute to their household finances or a dowry, make money of their own, and have a skilled trade that they could take pride in. The industrial revolution brought machinery to the world of lacemaking and changed the craft. Whereas lacemaking had primarily been done at home by groups of women, it now moved to factories, increasing productivity at the expense of shuttering small-scale handmade production. Nevertheless, nostalgia and longing for the handmade craftsmanship of previous times helped to preserve the practice of traditional handmade lacemaking.8
Despite market changes and demands, lacemaking styles evolved over time to become pillars of practicing communities' culture and heritage and played a major role in cultural exchange between countries and continents (Figure 2b). Having originated and flourished in Europe, the lacemaking craft was inspired by intricate patterns found in Middle Eastern and Asian woven textiles.9 Designing and preserving lacemaking patterns has always held deep value and importance, from the noblewomen of 1600s Venice, Italy, boasting books containing needlepoint lace patterns from all over Europe to twenty-first-century women in Central Slovakia taking great pride in and marketing their pattern collections.10 Further evolution and fusion of lacemaking styles were shaped by immigrants from different parts of Europe moving to new countries, bringing their lacemaking practices with them, and ultimately forming new styles as they mixed their craft with those of the new communities.
Unfortunately, some unique lacemaking techniques and artforms are endangered or vanishing, such as Spanier Arbeit, a unique metal wire-based two- and three-dimensional bobbin lace exclusive to Ashkenazi Jewish production, which was decimated by the Holocaust.11 Even within actively practicing communities, pattern heritage preservation is not consistently maintained. As an example, within the Central Slovakian lacemaking community, the older and younger generations have diverging views on lacemaking instruction and pattern preservation. The older generations view lacemaking as a craft to be learned directly from a master through examples and practice with direct supervision until the pupil develops an intuition for the craft. Many older lace makers do not keep their patterns after using them, sometimes even burning patterns they do not like, a practice that is viewed as irresponsible by younger craftsmen who wish to preserve the cultural knowledge embedded in patterns made by master lace makers. Younger generations still place high value on in-person verbal instruction, favoring both pattern sharing and lacemaking workshops.12
For an AI model to produce new lacemaking patterns, it must train on a large database of existing historical patterns. Given the age of the lacemaking craft, copyright and intellectual property issues should not be a barrier to accessing and using training data for generative AI models. Training on historical lacemaking books in the public domain will enable the generation of new digital design-for-manufacture tools while benefiting the craft by preserving lace patterns that would otherwise be lost to time with aging older generations and declining interest.13 As generative AI will inevitably impact industries and workforce development, it is also important to consider which artisans and what cultural heritage may be displaced by new technology or, to the contrary, provided with new tools to preserve and elevate the lacemaking craft.
Like many industries, the fashion industry in general seeks to reap the benefits of AI technology. Big data surrounds fashion, and companies hope to use this information to train generative models to create relevant and practical designs. Generative models are compelling for their ability to take advantage of real-time data and trends, personalize output to user preferences, and accelerate productivity.14 Historically, interactive genetic algorithms (IGA) have been adopted to inform the computer-aided design of fashion and evolve designs based on previous ones.15,16,17,18 While conventional genetic algorithms emulate natural selection and solve optimization problems by evaluating the fitness of each candidate solution against a predefined objective function, IGA allows the fitness function to be chosen by the user. However, these methods are still limited because they rely on discrete features such as style, silhouette, color, and pattern instead of using the overall design composition or microscopic textile structure as an input. In turn, neural networks have been used in the creative design process to pair fashion with human emotion, to recommend fashion looks based on previous user preferences, or to perform textile classification.19,20,21,22
Generative adversarial networks (GANs), created by Goodfellow et al. in 2014, set the tone for deep generative AI and opened new horizons in fashion design applications.23 A GAN is an unsupervised deep learning architecture that generates new data via two neural network components, the generator and the discriminator. The generator creates new data samples (e.g., images), which are then mixed with real data samples and run through the discriminator, which is tasked with correctly classifying these sample types. Since their creation, GANs have already had an impact in generating new fashion designs and have recently been joined by variational autoencoders (VAEs) and diffusion models (Figure 3).24,25,26,27,28,29 Each of these deep learning methods encodes stylistic information into a lower-dimensional latent space and represents different styles as a probability distribution that can be sampled. Deep learning has also aided textile visualization via image-to-image transfer, specifically neural style transfer, a technique common in computer vision that translates the semantic content of data from one domain to another.30,31,32,33,34 For instance, in Beg and Yu (2020), a user-uploaded image is transformed into an embroidered stitch, and in Wang, Xiong, and Cai (2020), a garment's pattern in one image is transformed into the shape of another.35,36
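The adversarial setup can be sketched as a single toy forward pass. The tiny linear networks, the 2-D noise, and the stand-in "real" data below are all illustrative assumptions, not a working fashion model; the point is only to show where the two losses come from.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical tiny generator: maps 2-D noise vectors to 4-D "pattern" vectors.
W_g = rng.normal(size=(2, 4))

# Hypothetical tiny discriminator: maps 4-D samples to a real/fake probability.
w_d = rng.normal(size=4)

def generator(z):
    return np.tanh(z @ W_g)              # fake samples

def discriminator(x):
    return sigmoid(x @ w_d)              # probability that x is "real"

z = rng.normal(size=(8, 2))              # a batch of noise
fake = generator(z)
real = rng.normal(loc=1.0, size=(8, 4))  # stand-in "real" data distribution

# Discriminator loss: classify real as 1 and fake as 0 (binary cross-entropy).
d_loss = (-np.mean(np.log(discriminator(real) + 1e-9))
          - np.mean(np.log(1.0 - discriminator(fake) + 1e-9)))

# Generator loss: fool the discriminator into scoring fakes as real.
g_loss = -np.mean(np.log(discriminator(fake) + 1e-9))
```

In training, gradient steps would alternate between minimizing `d_loss` over the discriminator weights and `g_loss` over the generator weights; that loop is omitted here for brevity.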
Recently, StyleGAN was developed by a group of NVIDIA researchers with the goal of improving GAN to generate highly realistic images. StyleGAN introduces an intermediate latent space that controls “styles” of the generated image during the image generation process, such as textile texture, pattern design, and gradient.37 This allows the model to have precise control over various aspects of the image, which the model adjusts as it learns to produce the most realistic image possible. Furthermore, the model also incorporates other techniques that improve the realism and resolution of the model’s outputs, such as progressive growing, mixed multiresolution technique, and noise inputs in the generator model. As a result, it generates high-quality images that exhibit a diverse range of appearances and structures. Due to these attributes, StyleGAN excels in creating detailed textile images, despite not specializing in textile generation.
In a trial survey of StyleGAN involving 200 users, knitted textile images were translated into swatches, and it was found that these swatches “were rated overall more creative, fashionable, and buyable than ones based on the real knitted textile images”.38 The later version, StyleGAN2, improves upon the realism and resolution of StyleGAN’s outputs by removing artifacts and addressing some limitations.39 With these changes, StyleGAN2 achieves the state of the art in realistic textile pattern generation, showing proficiency in resolution, texture details, and periodicity. Such methods would be especially useful as built-in digital tools for fashion designers depending on visual specifications such as color or stitch type. Entire libraries of new patterns can be quickly generated with these deep learning models.
So far, generative AI models have worked on image generation for textile patterns and have not yet fully evolved to the level of creating reproducible textiles. Despite advancements in knitted pattern generation and even symmetrical lace pattern generation in StyleGAN, the current phase of AI-generated content typically takes the form of pieced-together images that are not yet manufacturable.40,41 In this context, we define manufacturability as the ability of an AI model to generate a set of instructions sufficient to produce a textile from a generated pattern. For AI-generated textiles, the challenge lies not only in creating a generative AI model capable of producing patterns on demand but also in ensuring that patterns are complete, physically realizable, and encoded for production by hand or machine. Accordingly, we identify three critical stages in the AI-enabled textile design-for-manufacture process: attribute-specific pattern generation, process-specific instruction encoding, and physical fabrication. The attributes may comprise aesthetic as well as textural, mechanical, or thermal features, while process-specific instructions can take many different forms, depending on the choice of the manufacturing process (i.e., weaving, knitting, bobbin lacemaking, three-dimensional printing, or embroidery).
Modern knitting is a fully computerized textile construction technique that creates patterns from interlacing yarn loops comprising various stitch types (e.g., knit, purl, tuck, float) used as pixels (Figure 4). Punch cards, originally created for the Jacquard loom, have provided a way of encoding lace in a binary format while retaining the ability to make intricate designs. They have been used for the production of different lace styles such as Fair Isle, punch lace, knit weave, and tuck stitch. Fair Isle knitting, for instance, is a method used to create patterns with multiple colors via a stranding technique (the process of moving strands along the back of a work). In this example, the binary punch-card pattern encodes the assorted colors of the lace. Fair Isle knit designs are typically pixelated motifs that are repeated and mirrored to form patterns. In turn, the tuck lace technique allows for more geometric diversity in the lace topology, since it allows holes to open and the density of the overall design to vary. AI pattern generation from punch card images by GAN and neural style transfer techniques has been used to produce Fair Isle knitted laces.42
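The repeat-and-mirror construction of a Fair Isle punch card can be sketched as follows; the 4x4 motif and the card width are hypothetical choices for illustration.

```python
# Hypothetical 4x4 motif; 1 = punched hole (contrast color), 0 = blank (background).
motif = [
    [1, 0, 0, 1],
    [0, 1, 1, 0],
    [0, 1, 1, 0],
    [1, 0, 0, 1],
]

def mirror_and_repeat(motif, repeats):
    """Mirror each motif row onto itself, then tile the widened motif
    across the card width, as in typical Fair Isle punch-card layouts."""
    widened = [row + row[::-1] for row in motif]
    return [row * repeats for row in widened]

card = mirror_and_repeat(motif, repeats=3)  # 4 rows x 24 columns
```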
Machine knitting has recently been revolutionized by several practices, including (i) whole-garment knitting, which enables seamless three-dimensional garment manufacturing, (ii) assembly of knit primitives (such as tubes and sheets) into low-level machine instructions, and (iii) pipelines to generate new patterns from pre-existing ones.43,44,45,46,47,48 These innovations simplify the assembly of complex stitches but do not generate new patterns.
Human-understandable knit instructions were produced by SkyKnit, a model trained by Janelle Shane with help from the online knitting community.49 The model was trained on crowdsourced knit data and verified by the same community. Knitters tried to verify the feasibility of SkyKnit designs, but the outputs often did not make sense and skipped necessary instructions, forcing knitters to use domain knowledge to fill in the gaps. Another model, DeepKnit, instead generated machine-understandable instructions.50 This deep learning model incorporated the constraints of knitting machines and constructed low-level instructions from higher-level design specifications. DeepKnit generates new patterns using a long short-term memory architecture that takes in stitch operations as one-dimensional sequences of tokens; this allows knit data to be treated like formal sentences in language modeling and syntactically correct instructions to be generated. DeepKnit helped to identify an important interplay between knittability and uniqueness by demonstrating that increased knittability decreases uniqueness and vice versa. These observations provide background for a structural understanding of textiles and impose manufacturability constraints on pattern design.
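The kind of manufacturability constraint such models must respect can be illustrated with a hand-written toy validator. The token names and the two rules below (uniform row width, a cap on consecutive tucks in one column) are simplified assumptions for illustration, not the actual constraint set of SkyKnit or DeepKnit.

```python
def knittable(rows, max_consecutive_tucks=2):
    """Check a toy set of manufacturability constraints on tokenized stitch rows.

    rows: list of lists of stitch tokens, e.g. 'k' (knit), 'p' (purl), 't' (tuck).
    """
    if not rows:
        return False
    width = len(rows[0])
    if any(len(row) != width for row in rows):
        return False          # ragged rows cannot be cast onto a machine bed
    for col in range(width):
        run = 0
        for row in rows:
            run = run + 1 if row[col] == "t" else 0
            if run > max_consecutive_tucks:
                return False  # held loops would pile up on a single needle
    return True

knittable([["k", "t"], ["k", "t"], ["k", "k"]])   # two tucks in a column: OK
knittable([["k", "t"], ["k", "t"], ["k", "t"]])   # three tucks in a column: rejected
```

A generative model trained on valid sequences learns such rules implicitly; an explicit checker like this one can still serve as a post-generation filter.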
Another deep learning approach to establishing a design-for-manufacture pipeline maps real and synthetic (program-generated) images to knit instructions.51 Real images of knitted textiles are first refined into a regular synthetic view and then classified into 17 possible knit instructions for a Shima Seiki knitting machine (see an example of such instructions in Figure 3). Such a model could be used to postprocess AI-generated knit images to fabricate newly generated designs. We believe that to maintain cohesion between the generation model (such as StyleGAN2) and the instruction-translation model, the same images should be used to train each model individually. Greater consistency between the datasets of knit textiles and their corresponding instructions is essential to advance AI-enabled design for manufacture.
Unlike knitting, bobbin lace is made by braiding and twisting filaments or yarns wound on multiple bobbins (Figure 2). Simple movements of the bobbins (e.g., twists and crosses, Figure 5a) create stitches according to a predefined pattern. The manual bobbin lace technique uses patterns drawn on paper or parchment and pinned to a lace pillow, where the placement of the pins determines the pathway for the lace stitches (Figure 5b). Handcrafted bobbin lace instruction typically includes both an encoded sequence of stitches and a visual pricking pattern, which leaves room for interpretation of what stitches to work at each point of intersection. Checking completion is a critical step in determining pattern feasibility: it involves accounting for each thread, even when a piece is worked nonsequentially, as well as for the related edge effects. This interpretive aspect makes it necessary to co-create a functional generative AI model with the insights of current lacemaking practitioners.
The bobbin lace technique was mechanized and digitized for mass production with the invention of the Leavers loom in 1813. John Levers adapted a Jacquard loom head for use with the bobbin net machine engineered by John Heathcoat in 1808 (Figure 6a), allowing complex lace (the inset of Figure 6a) to be machine made by imitating the basic movements of the handmade technique. Leavers looms produce lace by intertwining two sets of threads: (i) the warp and beam threads, actuated by the Jacquard mechanism, move right or left, and (ii) the bobbin threads always follow the same path as their bobbins swing back and forth in a pendulum-like motion. The patterns created by the loom can be digitized by using a binary code with punch cards or computer codes.
An alternative lace encoding approach has recently been proposed to represent these patterns as graphs, which allows bobbin lace patterns to be effectively integrated as quantifiable data representations.52 Bobbin lace patterns can be represented as simple graphs known as grounds, in which nodes encode the actions (twist, cross, etc.) to be performed at each node, and edges represent the topological thread connections between nodes. Each combination of actions and threads can be represented by a special syntax (Figure 5a). For example, to produce an a la Vierge lace, we physically twist two threads together, do the same for another two threads, then cross them, and finally place a pin in the middle. The same syntax, however, can be used to encode different types of lace patterns. As an example, the [(TC p TC)] encoding (in which T represents a twist, C a cross, and p a pin) may be used to encode a la Vierge, Grille, and Torchon lace patterns, which all look distinctly different and exhibit different mechanical properties.53
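A minimal sketch of reading this syntax might look like the snippet below, under the simplifying assumption that grouping characters such as brackets and parentheses carry no actions of their own.

```python
def parse_stitch(code):
    """Parse a compact bobbin-lace stitch code such as '[(TC p TC)]' into an
    ordered list of actions (T = twist, C = cross, p = pin, following the
    notation described in the text; grouping characters are ignored here)."""
    names = {"T": "twist", "C": "cross", "p": "pin"}
    return [names[ch] for ch in code if ch in names]

actions = parse_stitch("[(TC p TC)]")
# → ['twist', 'cross', 'pin', 'twist', 'cross']
```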
Graph representation is important in generative AI, as it allows models to understand and learn intricate and complex lace patterns as capturable structures and relationships. Furthermore, abstracting patterns into edges and nodes allows easy manipulation of the data, which can aid in the model's feature identification and extraction. The ability to add, remove, or alter individual nodes and edges gives AI models the freedom to experiment with variations of a pattern while maintaining the essential attributes of the lace. By representing bobbin lace patterns as graphs (Figure 6b-d), we can use state-of-the-art generative AI models, such as graph neural networks, to further explore and innovate bobbin lace textile designs. Graph neural networks particularly excel at processing graph data, allowing them to capture complex features and relationships between edges and nodes. As a result, they can generate not only realistic textile graphs that reflect these relationships but also reproducible instructions to create them. This is not something image generation models can achieve, as they lack actual stitch data.
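As a sketch of such a graph representation, the toy function below lays out a staggered diamond-grid ground and labels every node with the same stitch code. The layout rule and the uniform stitch label are illustrative assumptions, not a complete lace model; the point is that nodes (actions) and edges (thread paths) become data that a model can add, remove, or relabel.

```python
def torchon_ground(rows, cols, stitch="TCpTC"):
    """Build a toy diamond-grid ground graph: nodes sit on a staggered
    lattice, each labelled with a stitch code; edges connect each node to
    the nodes diagonally below it, standing in for thread-pair paths."""
    nodes = {}
    edges = []
    for r in range(rows):
        for c in range(cols):
            if (r + c) % 2 == 0:          # staggered node placement
                nodes[(r, c)] = stitch
    for (r, c) in nodes:
        for dc in (-1, 1):
            if (r + 1, c + dc) in nodes:  # thread pair continues diagonally down
                edges.append(((r, c), (r + 1, c + dc)))
    return nodes, edges

nodes, edges = torchon_ground(4, 4)
```

Swapping the `stitch` label at selected nodes, or deleting nodes to open larger holes, is then a purely graph-level operation of the kind a graph neural network can learn to perform.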
The adaptation of three-dimensional (3D) printing techniques for lace manufacture faces a completely different challenge. While a sophisticated and flexible system of coding 3D patterns is well developed and standardized, reaching the same level of material flexibility, aesthetics, and production rate as those achieved with knitted or bobbin lace remains a challenge. On the other hand, different additive manufacturing techniques—including selective laser sintering, fused deposition modeling, and two-photon lithography—make use of the well-developed software tools such as Rhinoceros and Autodesk 123D 3D computer-aided design (CAD) modeling tools.54,55,56,57 CAD files are further converted into either the stereolithography or the additive manufacturing file format by tiling or tessellating the models’ geometrical surfaces with triangles. As a result, a 3D model is converted into a geometric (G)-code—a machine instruction file with the instructions of the print path—and then into physical patterns via 3D printing techniques (Figure 7a-d).58,59
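A minimal sketch of emitting G-code for one zigzag mesh pass, a stand-in for a single layer of a printed open-net structure, is shown below. The geometry, feed rate, and command subset are illustrative assumptions rather than settings for any particular printer or slicer.

```python
def lace_zigzag_gcode(width, rows, pitch, feed=1200):
    """Emit minimal G-code (coordinates in mm) for a zigzag mesh pass:
    G0 travel moves to each row start, G1 printing moves along the zigzag."""
    lines = ["G21 ; millimetre units", "G90 ; absolute positioning"]
    for r in range(rows):
        y = r * pitch
        lines.append(f"G0 X0 Y{y:.2f}")            # travel to row start
        for c in range(1, width + 1):
            x = c * pitch
            yy = y + (pitch if c % 2 else 0)       # alternate apex and base
            lines.append(f"G1 X{x:.2f} Y{yy:.2f} F{feed}")
    return "\n".join(lines)

gcode = lace_zigzag_gcode(width=4, rows=2, pitch=2.0)
```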
Using the methodology of G-code to generate lace structures also opens the opportunity of constructing 3D interlocking lace patterns using digital embroidery. In this process, repeated patterns produced in layers by a computer-controlled embroidery machine can build up new interlocking mechanisms. While each layer has its own mechanical properties, the construction creates joint spots that interlock between the layers, creating 3D lace when the dissolvable backing fabric is removed. The embroidered lace process is similar to chemical lace production but allows for increased complexity through the 3D layering technique. In contrast to bobbin lace, embroidery, which traces its origins to needlepoint lace techniques, uses only two continuous threads per layer, which allows it to be produced on more commonly available machines. Our process exploration points to the potential of embedding an AI system into existing G-code methodologies to generate layer images for these lace structures with unique properties and intricate overlaid patterns.
While the aesthetic properties of lace structures may drive their consumer appeal, it is the mechanical properties that play a critical role in determining the suitability of different patterns for various applications. In this context, key attributes include tensile strength, elasticity, dimensional stability, fineness, and texture. Considering each of these attributes allows for the creation of complex lace patterns, including structures that can bear distributed loads or move in interesting ways, like auxetic metamaterials. Similarly, creating lace that can deform either uniformly or nonuniformly in a predesigned fashion adds value to wearables and other scaffold structures. Generative AI is expected to expand the possibilities of incorporating these attributes and nontraditional materials into lacemaking practice, both elevating the craft to research significance and bringing functional and aesthetic value to modern textiles.
When developing advanced generative AI models under the design-for-manufacture paradigm, the tensile properties of different patterns should be evaluated and included in the model training protocol (Figure 8a).60 Along with the pattern's ultimate strength, other important attributes include Young's modulus and the characteristic transition strain (which can be calculated by fitting the tensile curve with a bilinear model, as shown in Figure 8b), as well as porosity, defined by the interplay between solid threads and open spaces (Figure 8c).61 High porosity in lace can enhance its visual appeal by creating a more intricate and delicate design, allowing light and air to pass through more freely. It also influences the fabric's weight and flexibility as well as moisture transport. From an economic perspective, increasing porosity without compromising mechanical strength can lead to more efficient use of materials and production processes, resulting in more sustainable production practices.
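The bilinear fit mentioned above can be sketched as a scan over candidate transition strains, least-squares fitting a line to each segment and keeping the breakpoint with the lowest total error. The synthetic tensile curve below is illustrative, not measured data.

```python
import numpy as np

def fit_bilinear(strain, stress):
    """Fit a two-segment (bilinear) model to a tensile curve by scanning
    candidate transition strains; returns (transition strain, initial
    modulus, final modulus)."""
    best = (np.inf, None, None, None)
    for i in range(2, len(strain) - 2):
        b = strain[i]
        lo, hi = strain <= b, strain >= b
        p1 = np.polyfit(strain[lo], stress[lo], 1)   # pre-transition line
        p2 = np.polyfit(strain[hi], stress[hi], 1)   # post-transition line
        sse = (np.sum((np.polyval(p1, strain[lo]) - stress[lo]) ** 2)
               + np.sum((np.polyval(p2, strain[hi]) - stress[hi]) ** 2))
        if sse < best[0]:
            best = (sse, b, p1[0], p2[0])
    return best[1], best[2], best[3]

# Synthetic curve: compliant up to 40% strain, then stiffening threefold.
strain = np.linspace(0.0, 1.0, 51)
stress = np.where(strain <= 0.4, 0.5 * strain, 0.2 + 3.0 * (strain - 0.4))
transition, e_initial, e_final = fit_bilinear(strain, stress)
```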
Figure 8 shows characteristic tensile curves and images of three different types of commercial cotton bobbin lace. These data reveal how specific design elements may contribute to the overall strength, stretchability, and resilience of these lace samples. Sample S1 includes oval woven patterns (known as spiders in bobbin lace), which are denser and more tightly interlocked, providing additional structural integrity and resistance to mechanical forces. As a result, the lace with pattern S1 can stretch significantly under tension while maintaining a moderate resistance to tearing, which is beneficial for applications requiring both flexibility and strength (Figure 8a). In contrast, sample S3 shows erratic behavior under stress, with different pattern elements failing at different points during the test. Various stages of pattern deformation under increasing strain have been captured visually at periodic strain intervals (Figure 8c). Initially, at lower strain levels, the lace exhibits minor alterations in its intricate patterns, maintaining its overall structure and coherence. As the strain increases, the fibers stretch, and the spaces within the lace’s patterns begin to expand until some regions (and eventually the whole pattern) fail.
Previous experimental work also revealed that even uniform net structures—such as, for example, a simple jersey knit pattern—can exhibit complex nonlinear dynamics of spatial deformation nucleation and propagation under tensile strain.62 To generate patterns with desired spatial stability, the recorded images of tested samples (Figure 8c) can be processed to calculate the local stitch displacement field as an additional attribute to be included in the AI model training.
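A minimal sketch of turning tracked stitch coordinates into a displacement field and local strain estimates is given below, using a hypothetical 4x4 grid of stitch nodes and an assumed uniform 30% stretch along y; in practice the coordinates would come from feature tracking on the recorded test images.

```python
import numpy as np

# Hypothetical tracked stitch-node coordinates before and after stretching.
rest = np.array([[x, y] for y in range(4) for x in range(4)], dtype=float)
stretched = rest * np.array([1.0, 1.3])       # assumed 30% stretch along y

displacement = stretched - rest               # per-node displacement field

# Local engineering strain along y, one estimate per vertical neighbour pair
# (the grid is 4 nodes wide, so vertical neighbours are 4 indices apart).
dy_rest = rest[4:, 1] - rest[:-4, 1]
dy_stretched = stretched[4:, 1] - stretched[:-4, 1]
local_strain = (dy_stretched - dy_rest) / dy_rest
```

For a nonuniform deformation, `local_strain` would vary across the sample, and maps of that variation are exactly the kind of additional attribute proposed for model training.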
Incorporating different materials into lacemaking practice presents new opportunities to innovate and extend the functionality of textiles. For example, lace samples using high-conductivity, infrared-transparent polyethylene (PE) fibers could amplify the cooling effects of the fiber when intertwined in an open lace structure, while also allowing for better aesthetic adoption and different ways to stretch (Figure 9a).63 The highly smooth surfaces of PE threads, as well as of other synthetic threads, extend the range of mechanical properties of lace samples even when the same structural pattern is used. Our tensile testing revealed that, compared to the cotton bobbin lace, the PE bobbin lace tensile curve is smooth and reaches a higher peak at ~150% strain, indicating that both tenacity and Young's modulus are higher by at least a factor of two. Images of the PE lace taken at varying strain levels (Figure 9b) confirm the tensile curve data, revealing that the lace exhibits superior elasticity and strength, which can be attributed to both stronger fibers and pattern rearrangement owing to thread slippage under deformation.
Other types of exotic fibers, such as conductive thread (Figure 9c) or microelectronic functional fibers (Figure 9d), can be incorporated into lace patterns. Historically, spanier arbeit and similar techniques found in other cultures used metal wires to produce both 2D and 3D lace structures, and the achieved patterns may provide an initial dataset for training AI models to use these types of materials in advanced lace construction.64 Modern lacemaking practice also makes use of gimps, thicker threads woven during the lacemaking process to outline a motif, a convention that extends naturally to the introduction of nontraditional functional fibers. The incorporation of nontraditional threads and elements into the lace can be accounted for by additional attributes in the AI model training, conditioning the generated patterns on specific thread properties or protecting from deformation those areas of the pattern where conductive threads are incorporated.
Computer vision can inform generative models that produce visually compelling designs through detail and feature extraction, deriving strain from deformation for a granular characterization of the textile structure. By analyzing detailed attributes derived from high-resolution lace images taken under varying strain conditions, computer vision systems can generate quantifiable data such as strain heat maps, in which variations in color and intensity indicate different levels of strain across the lace.65 These data can then be encoded in the lace structure (nodes, edges) to inform IGA models, influencing fitness functions that incorporate both visual appeal and functional performance metrics.
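A minimal sketch of how such a fitness function might blend a user's aesthetic rating with a strain-uniformity term is shown below. The weights and the variance penalty scale are illustrative assumptions, not values from our experiments.

```python
import numpy as np

def fitness(user_score, node_strains, w_aesthetic=0.5, w_mechanical=0.5,
            penalty=100.0):
    """Toy IGA fitness: blends the user's aesthetic rating (0..1) with a
    mechanical term rewarding evenly distributed strain across graph nodes.
    The penalty scale is an arbitrary illustrative choice."""
    strain_uniformity = 1.0 / (1.0 + penalty * np.var(node_strains))
    return w_aesthetic * user_score + w_mechanical * strain_uniformity

# A design rated 0.8 with nearly uniform node strain can outscore one
# rated 0.9 whose strain is concentrated in a few nodes.
even = fitness(0.8, np.array([0.10, 0.11, 0.09, 0.10]))
uneven = fitness(0.9, np.array([0.01, 0.02, 0.50, 0.01]))
```

In an interactive loop, `user_score` would come from the designer's selections and `node_strains` from the strain heat maps described above.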
The process can be further improved by applying dimensionality reduction and filtration techniques. Feature generation techniques such as principal component analysis and the Fourier transform can reduce dimensionality and extract noteworthy features from complex datasets, enabling precise prediction of lace tensile properties. Texture analysis methods such as the gray level co-occurrence matrix and Gabor filters can characterize surface properties and microstructures of materials, which correlate with mechanical properties such as ductility and strength.66,67 Users can then select desired designs from these generated candidates, guiding the IGA to evolve the selections into new variations that aim to optimize aesthetic qualities and mechanical properties simultaneously.
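Principal component analysis, for instance, can be implemented directly with a singular value decomposition. The following numpy sketch projects image-derived feature vectors onto their leading components; the data shapes are hypothetical stand-ins for real lace-image descriptors.

```python
import numpy as np

def pca_features(X, n_components=2):
    """Project feature vectors onto their top principal components.
    X: (n_samples, n_features) matrix, e.g. flattened descriptors
    extracted from lace images at successive strain levels."""
    Xc = X - X.mean(axis=0)                        # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T                # reduced representation

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 50))                      # stand-in feature matrix
Z = pca_features(X, n_components=3)
```

The reduced vectors `Z` would then feed the property-prediction or conditioning stages in place of the raw high-dimensional features.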
The strain maps can serve as surrogate models for a generative AI, substituting for textile CAD models and finite element analysis, which simulate detailed fiber–fiber interactions and forces well but are computationally expensive and slow to evaluate.68,69,70,71,72 Surrogate models are faster, simplified representations of these highly complex models, constructed with more computationally efficient algorithms and trained on the original model outputs. They emulate the behavior of the original model across the input space, enabling researchers to quickly evaluate material properties under varied input parameters and aiding the generation of novel material compositions and structures with desired properties. This approach expedites the design process and reduces reliance on physical prototyping, though it requires further refinement to match the predictive accuracy of traditional methods. Enhancing property labeling and improving training algorithms to incorporate comprehensive material data will improve the manufacturability and utility of generated lace patterns; future developments should aim to establish a multidisciplinary framework that integrates material science, engineering, and design principles.
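A toy example of the surrogate idea: fit a cheap polynomial to a handful of expensive simulation outputs and evaluate it instantly at untried design points. The data below are invented purely for illustration and do not come from our measurements.

```python
import numpy as np

# Hypothetical training data: a few expensive simulation runs mapping a
# design parameter (e.g. pattern density) to a simulated peak tenacity.
density = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
tenacity = np.array([1.1, 1.9, 2.8, 3.5, 4.1])   # invented outputs

# Quadratic surrogate fitted by least squares; evaluating it costs
# microseconds, versus minutes-to-hours for the original model.
coeffs = np.polyfit(density, tenacity, deg=2)
surrogate = np.poly1d(coeffs)

prediction = surrogate(0.5)   # fast estimate at an untried design point
```

In practice the inputs would be the multi-attribute pattern encodings described above, and the surrogate would be a trained neural network rather than a polynomial, but the role it plays in the loop is the same.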
Here, we introduce a potential design pipeline that allows lace properties to be engineered for modern uses (Figure 10). The pipeline aims to iteratively optimize the tensile properties of geometric designs produced by generative AI and can be expanded with additional attributes, such as moisture transport or transparency to visible or infrared light. We suggest a surrounding analytical framework for property engineering and conditioning of the generative model. Our integration of property labels into the process of generating new lace designs relies on several stages of image processing, feature detection, and machine learning.
The procedure starts with video capture of lace samples on the mechanical tester; the videos are then separated into individual lace input images. Noise reduction techniques such as Gaussian blur or median filtering are applied first, as they are crucial for minimizing sensor noise and enhancing the clarity of the captured images. This step ensures that the subsequent analysis is based on high-quality inputs, reducing errors in feature detection. Following noise reduction, contrast enhancement through histogram equalization or adaptive contrast techniques makes the lace patterns more discernible, facilitating the identification and differentiation of intricate details crucial for feature detection algorithms.73 Robust feature detection algorithms like SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features) are then applied to identify and describe key points in the lace images.74 These features are invariant to scale and rotation, making them ideal for tracking changes in the lace patterns under varying conditions.
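In practice OpenCV provides these operations directly (Gaussian blur, histogram equalization, SIFT); the following pure-numpy sketch of global histogram equalization simply shows what the contrast-enhancement step computes.

```python
import numpy as np

def equalize_histogram(img):
    """Global histogram equalization for an 8-bit grayscale image:
    remap intensities so their cumulative distribution becomes
    approximately uniform, making faint lace threads more discernible."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]                  # first occupied grey level
    # Standard equalization lookup table, rescaled to 0..255.
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255)
    return lut.astype(np.uint8)[img]

# A low-contrast image occupying only grey levels 100..120 is
# stretched to span the full 0..255 range.
low_contrast = np.random.default_rng(1).integers(
    100, 121, (64, 64)).astype(np.uint8)
enhanced = equalize_histogram(low_contrast)
```

Adaptive variants (e.g., CLAHE) apply the same remapping within local tiles rather than globally, which better preserves fine lace detail under uneven lighting.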
We can then introduce markers, such as small colored dots, to the surface of a lace sample; color segmentation or shape-based detection algorithms identify these markers, which serve as reference points for deformation tracking. The feature-tracking phase employs techniques such as the Lucas–Kanade optical flow method to compute motion vectors of selected features or markers from one frame to the next, enabling continuous tracking.75 For discrete analysis, features are matched by their descriptors using Euclidean distance calculations and Lowe’s ratio test to determine the movement of each feature as tension increases, providing insights into the mechanical properties of the lace material.76 By including a reference object of known dimensions in the images (e.g., a ruler), we can ascertain a scale factor that converts measurements from pixels to real-world units (e.g., millimeters). This scale factor is crucial for precise deformation measurement and analysis, in which the software calculates strain as the change in length relative to the original length. Shear strain calculations may also be incorporated if required, providing a comprehensive analysis of material behavior under stress.77 Strain values can then be mapped to colors, typically from blue (low strain) to red (high strain). Visualization tools such as MATLAB or Python (with libraries like Matplotlib or OpenCV) can then be used to render these strain maps (see examples in Figure 11).
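The strain computation and blue-to-red mapping can be sketched in a few lines of numpy; the pixel scale and marker distances below are invented for illustration only.

```python
import numpy as np

def engineering_strain(lengths, ref_length):
    """Strain as change in length relative to the original length,
    after pixel-to-mm conversion via a known scale factor."""
    return (lengths - ref_length) / ref_length

def strain_to_rgb(strain, s_min, s_max):
    """Map strain linearly onto a blue (low) to red (high) gradient,
    as in the strain heat maps described in the text."""
    t = np.clip((strain - s_min) / (s_max - s_min), 0.0, 1.0)
    r, b = t, 1.0 - t
    g = np.zeros_like(t)
    return np.stack([r, g, b], axis=-1)        # RGB in [0, 1]

# Marker pair initially 400 px apart; a ruler gives 0.1 mm per pixel.
scale = 0.1                                     # mm per pixel (assumed)
ref_mm = 400 * scale
lengths_mm = np.array([420, 460, 500]) * scale  # lengths in later frames
eps = engineering_strain(lengths_mm, ref_mm)    # 0.05, 0.15, 0.25
colors = strain_to_rgb(eps, 0.0, 0.25)
```

The resulting per-marker colors would be rendered over the lace image with Matplotlib or OpenCV to produce the strain maps of Figure 11.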
In the generative phase, property labels derived from the feature analysis and the resulting mechanical characterization of the textile structure can be used to condition the generative model. Conditioning in generative models such as variational autoencoders (VAEs) and generative adversarial networks (GANs) encompasses techniques that guide the generation of new data samples toward specific attributes or characteristics. In conditional GANs, the generator and discriminator are conditioned on the labels to produce outputs that closely match the labeled features, directing the generator toward lace designs that reflect the desired characteristics. In conditional VAEs, both the encoder and decoder undergo parameter adjustments based on labels or attributes associated with the data during training, enabling the model to generate new samples that conform to these specified labels or attributes.
Various other methods of conditioning exist, including training the model on labeled data, integrating a supplementary loss term into the training objective, and adversarial training. Foundationally, training on labeled data incorporates class labels or specific attributes into the training process so that the model learns the relationship between input features and specified attributes, and generated samples adhere to the provided labels. Integrating a loss term into the training objective penalizes deviations from desired attributes, incentivizing the model to generate samples aligned with specified conditions; the term is typically a mathematical function measuring the difference between predicted outputs and ground-truth attributes. Another approach modifies the latent space distribution to encode desired attributes by adjusting the mean and variance parameters of the latent distribution. Specifically, for each desired attribute or category, the mean vector of the latent distribution is adjusted to capture the average latent representation corresponding to that attribute. Variance parameters control the spread or variability of these latent representations and thus the shape of the latent distribution: decreasing variance produces more deterministic generations close to the specified attributes, whereas increasing variance introduces more randomness. Adversarial training techniques such as conditional GANs can also modify the latent space distribution by conditioning an additional discriminator network on desired attributes and training it to distinguish between real and generated samples based on those attributes. This encourages the generator to explore the latent space in a way that captures the desired attribute information as it strives to create realistic and specific outputs.
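The simplest of these conditioning schemes, appending an attribute label to the latent code and adding a supplementary attribute loss, can be sketched as follows. The function names and toy dimensions are our assumptions, shown to make the mechanism concrete rather than to prescribe an implementation.

```python
import numpy as np

def condition_latent(z, label, n_classes):
    """Simplest conditioning scheme: append a one-hot attribute label
    (e.g. stitch type or target strength class) to the latent code so
    the decoder sees both the noise sample and the requested attribute."""
    one_hot = np.eye(n_classes)[label]
    return np.concatenate([z, one_hot])

def conditional_loss(recon_loss, predicted_attr, target_attr, lam=1.0):
    """Training objective with a supplementary penalty term that pushes
    generated samples toward the specified attribute values."""
    attr_loss = np.mean((predicted_attr - target_attr) ** 2)
    return recon_loss + lam * attr_loss

z = np.random.default_rng(2).normal(size=8)    # latent sample
z_cond = condition_latent(z, label=1, n_classes=3)
```

In a real conditional VAE or GAN the concatenation happens inside the network graph and the losses are backpropagated, but the information flow is exactly this: the label rides alongside the latent code, and the extra loss term penalizes attribute mismatch.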
There is much room for exploration and creativity in balancing aesthetic and functional attributes in generated bobbin lace designs. Commercial clothing and accessories have always demanded both structural integrity and aesthetic function: a handbag, for example, must be beautiful in color and design but also needs a handle, base, and finish that wear evenly over time, a combination achieved through structural reinforcement, layering, and material choice. In a generated bobbin lace design, AI could place a large decorative bird motif in a portion of the textile that undergoes lower strain and wear, helping the motif stay intact without compromising overall durability. Alternatively, decorative motifs such as flowers could be generated in varying degrees of structural strength, which is tied closely to the density of nodes within a design. Generative AI then becomes a creative aide to the designer and textile maker, helping retain the artistry and heritage of lace while providing specific data-driven insight that is otherwise difficult to access.
To complete the pipeline, in addition to the data processing and model conditioning, we also suggest a graphical user interface that allows users to select or specify desired properties, such as evenly spread strain, points where stress originates, and the size and shape of the lace piece. The interface could include multidimensional inputs for selecting labels corresponding to different lace features such as pattern density and stitch type. This would enable users to dynamically update a preview of the potential lace design, refine their selections based on real-time feedback, and iteratively develop the final design. The culmination of this process could integrate all data and results into a cohesive analysis report, featuring visualizations of motion vectors, strain maps, and 3D models of the lace deformation.
When discussing the impacts of generative AI on handicrafts, it is critical to incorporate the human perspective. AI-generated art has already caused controversy by displacing some visual artists’ work with images generated by large language models (LLMs). To gain more insight into the needs and opinions of both handicraft experts and learners, we conducted an introductory bobbin lace workshop in mid-April 2024 at the Massachusetts Institute of Technology (MIT), targeting a younger audience. The workshop aimed to discuss sentiments towards bobbin lace and the broad integration of AI in artistic domains, particularly textile crafts. We also used it as a vehicle to gather qualitative data on the learning processes associated with a less popular traditional fiber art craft. Participants were nineteen- to twenty-three-year-old undergraduate students at MIT who expressed preexisting interest in fiber art crafts and had no prior experience making lace. The response to the workshop was overwhelmingly positive, with over 50 students registering their interest within two days, indicating existing interest in lace craft. Essential bobbin lace materials were prepared and provided cost free to participants, including bobbins, thread, ethafoam pillows, pricking charts, and instructional sheets. Shayna Ahteck, a member of the research team and experienced lacemaking practitioner, gave live instruction for making a piece of torchon bobbin lace, covering all phases from winding bobbins to tying off and finishing.
Participants highlighted the value of in-person instruction, noting it as crucial for overcoming the challenges of self-learning in the absence of clear and accessible online resources. The use of pricking charts alongside finished examples was particularly beneficial, with one participant remarking, “I don’t think I would have learned without both diagrams and live demos.” During the workshop, we engaged students in discussions to identify potential improvements in learning facilitation. Many suggestions focused on simplifying complex tasks and enhancing graphical representations of thread movements. We therefore see a need and opportunity for an AI assistant to help with learning how to make lace stitches, lowering the barrier to entry and supplementing the efforts of a lace teacher. Participants also observed how lacemaking is a time-intensive craft, with greater appreciation for the labor involved in producing elaborate handmade lace patterns. Overall, feedback underscored the importance of community outreach and educational events as platforms for collaborative ideation.
The workshop concluded with a structured discussion about the role of AI in the arts, particularly pertaining to textile crafts. While participants were receptive to experimenting with AI tools, some expressed reservations about the creative quality and practical applicability of the outputs generated by current AI technologies, such as Midjourney. They pointed out that many AI-generated patterns were less creative than human-generated ones, of perceived low quality, or impractical for actual creation. To date, AI-generated patterns for hand crafts have indeed fared worse than those created by human artists and hobbyists. Prompts to create AI-generated crochet patterns returned hilariously bad LLM-created instructions and deceptive images that are distorted or structurally impossible to complete.78 Relatedly, a participant in the workshop expressed a sense of security in their domain of sewn bespoke clothing as being “safe” from generative AI.
Despite automated sewing being technically possible, there are technical challenges in handling soft fabrics and designing proper garment construction, ethical issues of sweatshop labor, and persistent cultural desires for handmade clothing. Just as hobby sewists, fashion designers, and tailors flourish despite fast fashion and cheaply available clothing, we should embrace lacemaking as a high-end craft while textile machining pipelines meet demand for mass-produced lace. Here, we underscore that generative AI can supplement the individual lacemaking designer as a design resource while still retaining the human touch in lace craft.
For generative AI to have an impact on textile craft and engineering, there must be pathways to create in the physical world from digital designs. In this paper, we identified challenges in the model creation and manufacturing pipelines within the rapidly evolving ecosystem of AI/machine learning techniques and applications in the textile arts and industry. We chose to focus on bobbin lace as an example to experiment with a lesser-known textile craft that combines elements from many different textile domains and offers new opportunities utilizing an open net structure. Choosing the complex craft of lacemaking also pays homage to the roots of computing in textile manufacturing’s Jacquard looms and punch cards. As we progress into the generative AI revolution, it is relevant to look to how we might be inspired by and integrate the textiles woven into the fabric of computing culture and the world’s culture overall.
Despite generative AI threatening to displace artists, we highlight the need to collaborate with artisans and technologists to thoughtfully shape the evolution of AI to be informed by embodied heritage knowledge of the lacemaker and the invaluable nature of human skills and learning communities. In return, generative AI can augment and assist textile creative processes in the ideation stage through renders, simulations, or data-assisted optimization. For all our efforts, manufacturability only matters if there are still people left to make it. With a hopeful outlook on design tools for textiles and wearables incorporating generative AI assistance, there can be space for generative AI to assist with lace pattern design for stakeholders in scientific, industrial, and artistic domains. Generative AI pattern tools can add aesthetic, historical, and cultural value to current lacemaking practitioners and functional value in renewed scientific and engineering relevance of lace when enabled by mechanized manufacturing pipelines. In preparing this concept paper, we used Undermind and ChatGPT LLMs to help with literature search and to evaluate the porosity of different lace patterns shown in Figure 8 and optimistically predict that, once developed, generative AI models can be used to weave complex multidimensional textiles just as they now weave complex narratives.
Many of the handmade bobbin lace samples depicted in this paper’s figures are modified from patterns in Jo Edkins’ Online Bobbin Lace School (https://www.theedkins.co.uk/jo/lace/). Her tutorials and documentation have been a valuable resource in educating new lacemakers and helping to keep the lacemaking craft tradition alive. This work has been done with support from the MIT Generative AI Impact Award. We thank Advanced Functional Fabrics of America (AFFOA) for access to a Shima Seiki knitting machine.
Aishwariya, S., and B. Ramyabharathi. “Lace through Time: Exploring History, Types, Materials, Motifs, Innovations and Designing Lace for Non-Textile Products.” Man-Made Textiles in India 11, (November 2023): 367–72. https://www.researchgate.net/publication/378215123_Lace_through_time_Exploring_history_types_materials_motifs_innovations_and_designing_lace_for_non-textile_products.
Alberghini, Matteo, Seongdon Hong, L. Marcelo Lozano, Volodymyr Korolovych, Yi Huang, Francesco Signorato, S. Hadi Zandavi, et al. “Sustainable Polyethylene Fabrics with Engineered Moisture Transport for Passive Cooling.” Nature Sustainability 4 (2021): 715–24. https://doi.org/10.1038/s41893-021-00688-5.
Alessandrina. “Lace Meets Tuck on Brother Machines.” Last modified December 5, 2020. https://alessandrina.com/2020/12/05/lace-meets-tuck-on-brother-machines/.
Anaraki, Nazanin Alsadat Tabatabaei. “Fashion Design Aid System with Application of Interactive Genetic Algorithms.” In Computational Intelligence in Music, Sound, Art and Design, edited by João Correia, Vic Ciesielski, and Antonios Liapis, 289–303. Switzerland: Springer, Cham, 2017. https://doi.org/10.1007/978-3-319-55750-2_20.
Asor, Shahar, and Yoav Sterman. “A Parametric 3D Knitting Workflow for Punchcard Knitting Machines.” Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, no. 10 (April 2023): 1–6. https://doi.org/10.1145/3544549.3585721.
Baldrati, Alberto, Davide Morelli, Giuseppe Cartella, Marcella Cornia, Marco Bertini, and Rita Cucchiara. “Multimodal Garment Designer: Human-Centric Latent Diffusion Models for Fashion Image Editing.” arXiv (April 2023). https://doi.org/10.48550/arXiv.2304.02051.
Beecroft, M. “3D Printing of Weft Knitted Textile Based Structures By Selective Laser Sintering of Nylon Powder.” IOP Conference Series: Materials Science and Engineering 137, no. 1 (2016): 12017. https://doi.org/10.1088/1757-899X/137/1/012017.
Beg, Mohammad Akif, and Jia Yuan Yu. “Generating Embroidery Patterns Using Image-to-Image Translation.” arXiv (March 2020). https://doi.org/10.48550/arXiv.2003.02909.
Business of Fashion and McKinsey & Company. “The Year Ahead: How Gen AI Is Reshaping Fashion’s Creativity.” Business of Fashion, December 18, 2023. https://www.businessoffashion.com/articles/technology/the-state-of-fashion-2024-report-generative-ai-artificial-intelligence-technology-creativity/.
Cao, Shidong, Wenhao Chai, Shengyu Hao, Yanting Zhang, Hangyue Chen, and Gaoang Wang. “DiffFashion: Reference-based Fashion Design with Structure-aware Transfer by Diffusion Models.” arXiv (February 2023). https://doi.org/10.48550/arXiv.2302.06826.
Carucci, Elinor, and Sara Bader. The Collars of RBG: A Portrait of Justice. New York: Clarkson Potter, 2023.
Chen, Chien-Chang, and Daniel C. Chen. “Multi-resolutional Gabor Filter in Texture Analysis.” Pattern Recognition Letters 17, no. 10 (September 1996): 1069–76. https://doi.org/10.1016/0167-8655(96)00065-7.
Date, Prutha, Ashwinkumar Ganesan, and Tim Oates. “Fashioning with Networks: Neural Style Transfer to Design Clothes.” arXiv (July 2017). https://doi.org/10.48550/arXiv.1707.09899.
Dcarsprungli. “Ada Lovelace: Weaving Algebraic Patterns Like Looms Weave Flowers and Leaves.” The Good Times, June 28, 2022. https://www.the-good-times.org/people-2/ada-lovelace-weaving-algebraic-patterns-like-looms-weave-flowers-and-leaves/.
Desai, J., B. Bandyopadhyay, and C. D. Kane. “Neural Network Based Fabric Classification and Blend Composition Analysis.” Proceedings of IEEE International Conference on Industrial Technology 2000 1 (January 2000): 231–6. https://doi.org/10.1109/ICIT.2000.854137.
Forman, Jack, Mustafa Doga Dogan, Hamilton Forsythe, and Hiroshi Ishii. “DefeXtiles: 3D Printing Quasi-Woven Fabric Via Under-Extrusion.” Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (October 2020): 1222–33. https://doi.org/10.1145/3379337.3415876.
Freedgood, Elaine. “‘Fine Fingers’: Victorian Handmade Lace and Utopian Consumption.” Victorian Studies 45, no. 4 (Summer 2003): 625–47. http://www.jstor.org/stable/3829530.
Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. “Generative Adversarial Nets.” In NIPS'14: Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, edited by Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, 2672–80. Cambridge, MA: MIT Press, 2014. https://dl.acm.org/doi/10.5555/2969033.2969125.
Guler, R. A., S. Tari, and G. Unal. “Landmarks Inside the Shape: Shape Matching Using Image Descriptors.” Pattern Recognition 49 (January 2016): 79–88. https://doi.org/10.1016/j.patcog.2015.07.013.
Bharati, Manish H., J. Jay Liu, and John F. MacGregor. “Image Texture Analysis: Methods and Comparisons.” Chemometrics and Intelligent Laboratory Systems 72, no. 1 (June 2004): 57–71. https://doi.org/10.1016/j.chemolab.2004.02.005.
Hsiao, Wei-Lin, Isay Katsman, Chao-Yuan Wu, Devi Parikh, and Kristen Grauman. “Fashion++: Minimal Edits for Outfit Improvement.” 2019 IEEE/CVF International Conference on Computer Vision (ICCV) (October 2019): 5046–55. https://doi.org/10.1109/ICCV.2019.00515.
Hyland, Sabine. “Writing with Twisted Cords: The Inscriptive Capacity of Andean Khipus.” Current Anthropology 58, no. 3 (April 2017): 412–19. https://doi.org/10.1086/691682.
Jiang, Xingyu, Jiayi Ma, Guobao Xiao, Zhenfeng Shao, and Xiaojie Guo. “A Review of Multimodal Image Matching: Methods and Applications.” Information Fusion 73 (September 2021): 22–71. https://doi.org/10.1016/j.inffus.2021.02.012.
Jones, Ann Rosalind. “Labor and Lace: The Crafts of Giacomo Franco’s Habiti delle donne venetiane.” I Tatti Studies 17, no. 2 (Fall 2014): 399–425. https://doi.org/10.1086/678268.
Karagoz, Halil Faruk, Gulcin Baykal, Irem Arikan Eksi, and Gozde Unal. “Textile Pattern Generation Using Diffusion Models.” arXiv (April 2023). https://doi.org/10.48550/arXiv.2304.00520.
Karlin, Jacob, and Lily Homer. “Searching for Spanier Arbeit.” Protocols, 2020. https://prtcls.com/article/karlin-homer_searching-for-spanier-arbeit/.
Karmon, Ayelet, Yoav Sterman, Tom Shaked, Eyal Sheffer, and Shoval Nir. “KNITIT: A Computational Tool for Design, Simulation, and Fabrication of Multiple Structured Knits.” Proceedings of the 2nd ACM Symposium on Computational Fabrication, no. 4 (June 2018): 1–10. https://doi.org/10.1145/3213512.3213516.
Karras, Tero, Samuli Laine, and Timo Aila. “A Style-Based Generator Architecture for Generative Adversarial Networks.” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (June 2019): 4396–405. https://doi.org/10.1109/CVPR.2019.00453.
Karras, Tero, Samuli Laine, Miika Aittala, Janne Hellsten, Jaakko Lehtinen, and Timo Aila. “Analyzing and Improving the Image Quality of StyleGAN.” 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (June 2020): 8107–16. https://doi.org/10.1109/CVPR42600.2020.00813.
Kaspar, Alexandre, Liane Makatura, and Wojciech Matusik. “Knitting Skeletons: A Computer-Aided Design Tool for Shaping and Patterning of Knitted Garments.” Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (April 2019): 53–65. https://doi.org/10.1145/3332165.3347879.
Kaspar, Alexandre, Tae-Hyun Oh, Liane Makatura, Petr Kellnhofer, Jacqueline Aslarus, and Wojciech Matusik. “Neural Inverse Knitting: From Images to Manufacturing Instructions.” arXiv (February 2019). https://doi.org/10.48550/arXiv.1902.02752.
Katwala, Amit. “The Monstrous Crochet Creations of ChatGPT.” Wired, August 15, 2023. https://www.wired.com/story/chatGPT-crochet.
Kim, Hee-Su, and Sung-Bae Cho. “Application of Interactive Genetic Algorithm to Fashion Design.” Engineering Applications of Artificial Intelligence 13, no. 6 (December 2000): 635–44. https://doi.org/10.1016/S0952-1976(00)00045-2.
Kim, Na Yeon, Yunhee Shin, and Eun Yi Kim. “Emotion-Based Textile Indexing Using Neural Networks.” In Human-Computer Interaction. HCI Intelligent Multimodal Interaction Environments, edited by Julie A. Jacko, 349–57. Berlin: Springer, 2007. https://doi.org/10.1007/978-3-540-73110-8_37.
Koptelov, Anatoly, Adam Thompson, Stephen R. Hallett, and Bassam El Said. “A Deep Learning Approach for Predicting the Architecture of 3D Textile Fabrics.” Materials & Design 239 (March 2024): 112803. https://doi.org/10.1016/j.matdes.2024.112803.
Lehrecke, August, Cody Tucker, Xiliu Yang, Piotr Baszynski, and Hanaa Dahy. “Tailored Lace: Moldless Fabrication of 3D Bio-Composite Structures through an Integrative Design and Fabrication Process.” Applied Sciences 11, no. 22 (November 2021): 10989. https://doi.org/10.3390/app112210989.
Li, Jian, Shangping Zhong, and Kaizhi Chen. “SStyleGAN: A StyleGAN Model for Generating Symmetrical Lace Images.” Proceedings Volume 13105, International Conference on Computer Graphics, Artificial Intelligence, and Data Processing (ICCAID 2023) 131050G (March 2024). https://doi.org/10.1117/12.3026714.
Li, Yuncheng, Liangliang Cao, Jiang Zhu, and Jiebo Luo. “Mining Fashion Outfit Composition Using an End-to-End Deep Learning Approach on Set Data.” IEEE Transactions on Multimedia 19, no. 8 (August 2017): 1946–55. https://doi.org/10.1109/TMM.2017.2690144.
Lin, H., A. C. Long, M. Sherburn, and M. J. Clifford. “Modelling of Mechanical Behaviour for Woven Fabrics under Combined Loading.” International Journal of Material Forming 1, no. suppl. 1 (2008): 899–902. https://doi.org/10.1007/s12289-008-0241-7.
Lin, Hua, Martin Sherburn, Jonathan Crookston, Andrew C. Long, Mike J. Clifford, and I. Arthur Jones. “Finite Element Modelling of Fabric Compression.” Modelling and Simulation in Materials Science and Engineering 16, no. 3 (2008): 35010. https://doi.org/10.1088/0965-0393/16/3/035010.
Liu, Linlin, Haijun Zhang, Yuzhu Ji, and Q. M. Jonathan Wu. “Toward AI Fashion Design: An Attribute-GAN Model for Clothing Match.” Neurocomputing 341 (May 2019): 156–67. https://doi.org/10.1016/j.neucom.2019.03.011.
Liu, Xin, Xiao-Yi Zhou, Bangde Liu, and Chenglin Gao. “Multiscale Modeling of Woven Composites by Deep Learning Neural Networks and Its Application in Design Optimization.” Composite Structures 324 (November 2023): 117553. https://doi.org/10.1016/j.compstruct.2023.117553.
Lucas, Bruce D., and Takeo Kanade. “An Iterative Image Registration Technique with an Application to Stereo Vision.” International Joint Conference on Artificial Intelligence (August 1981). https://api.semanticscholar.org/CorpusID:2121536.
Makovicky, Nicolette. “‘Something to Talk About’: Notation and Knowledge-Making Among Central Slovak Lace-Makers.” Journal of the Royal Anthropological Institute 16 (May 2010): S80–S99. http://www.jstor.org/stable/40606066.
McCann, James, Lea Albaugh, Vidya Narayanan, April Grow, Wojciech Matusik, Jennifer Mankoff, and Jessica Hodgins. “A compiler for 3D machine knitting.” ACM Transactions on Graphics 35, no. 4 (July 2016): 1–11. https://doi.org/10.1145/2897824.2925940.
Moestopo, Widianto P., Arturo J. Mateos, Ritchie M. Fuller, Julia R. Greer, and Carlos M. Portela. “Pushing and Pulling on Ropes: Hierarchical Woven Materials.” Advanced Science 7, no. 20 (October 2020): 2001271. https://doi.org/10.1002/advs.202001271.
Mok, P. Y., Jie Xu, X. X. Wang, J. T. Fan, Y. L. Kwok, and John H. Xin. “An IGA-Based Design Support System for Realistic and Practical Fashion Designs.” Computer-Aided Design 45, no. 11 (November 2013): 1442–58. https://doi.org/10.1016/j.cad.2013.06.014.
Narayanan, Vidya, Lea Albaugh, Jessica Hodgins, Stelian Coros, and James Mccann. “Automatic Machine Knitting of 3D Meshes.” ACM Transactions on Graphics 37, no. 3 (June 2018): 1–15. https://doi.org/10.1145/3186265.
Pandey, Nilesh, and Andreas Savakis. “Poly-GAN: Multi-conditioned GAN for Fashion Synthesis.” Neurocomputing 414 (November 2020): 356–64. https://doi.org/10.1016/j.neucom.2020.07.092.
Patowary, Kaushik. “That Time When Computer Memory Was Handwoven by Women.” Amusing Planet, February 4, 2020. https://www.amusingplanet.com/2020/02/that-time-when-computer-memory-was.html.
Pizer, Stephen M., E. Philip Amburn, John D. Austin, Robert Cromartie, Ari Geselowitz, Trey Greer, Bart ter Haar Romeny, John B. Zimmerman, and Karel Zuiderveld. “Adaptive Histogram Equalization and Its Variations.” Computer Vision, Graphics, and Image Processing 39, no. 3 (September 1987): 355–68. https://doi.org/10.1016/S0734-189X(87)80186-X.
Poincloux, Samuel, Mokhtar Adda-Bedia, and Frédéric Lechenault. “Crackling Dynamics in the Mechanical Response of Knitted Fabrics.” Physical Review Letters 121, no. 5 (July 2018): 58002. https://doi.org/10.1103/PhysRevLett.121.058002.
Postrel, Virginia. The Fabric of Civilization: How Textiles Made the World. New York: Basic Books, 2020.
Sarmiento, James-Andrew R. “Exploiting Latent Codes: Interactive Fashion Product Generation, Similar Image Retrieval, and Cross-Category Recommendation using Variational Autoencoders.” arXiv (September 2020). https://doi.org/10.48550/arXiv.2009.01053.
Scheidt, Fabian, Jifei Ou, Hiroshi Ishii, and Tobias Meisen. “deepKnit: Learning-Based Generation of Machine Knitting Code.” Procedia Manufacturing 51 (2020): 485–92. https://doi.org/10.1016/j.promfg.2020.10.068.
Shane, Janelle. You Look Like a Thing and I Love You: How Artificial Intelligence Works and Why It’s Making the World a Weirder Place. New York: Voracious, 2019.
Takahashi, Haruki, and Jeeeun Kim. “3D Printed Fabric: Techniques for Design and 3D Weaving Programmable Textiles.” Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (October 2019): 43–51. https://doi.org/10.1145/3332165.3347896.
TheCatherine. “Words That Scream AI’s Hand in Writing and How to Avoid Them.” Wealthy Affiliate, January 21, 2024. https://my.wealthyaffiliate.com/thecatherine/blog/words-that-scream-ais-hand-in-writing-and-how-to-avoid-them.
University of Arizona. “Digital Archive of Documents Related to Lace.” Last modified July 27, 2018. https://www2.cs.arizona.edu/patterns/weaving/lace.html.
Verpoest, Ignaas, and Stepan V. Lomov. “Virtual Textile Composites Software WiseTex: Integration with Micro-mechanical, Permeability and Structural Analysis.” Composites Science and Technology 65, no. 15 (December 2005): 2563–74. https://doi.org/10.1016/j.compscitech.2005.05.031.
Wang, Hanying, Haitao Xiong, and Yuanyuan Cai. “Image Localized Style Transfer to Design Clothes Based on CNN and Interactive Segmentation.” Computational Intelligence and Neuroscience 2020 (December 2020): 8894309. https://doi.org/10.1155/2020/8894309.
Wang, Haosha, Joshua De Haan, and Khaled Rasheed. “Style-Me – An Experimental AI Fashion Stylist.” In Trends in Applied Knowledge-Based Systems and Data Science, edited by Hamido Fujita, Moonis Ali, Ali Selamat, Jun Sasaki, and Masaki Kurematsu, 553–61. Cham: Springer, 2016. https://doi.org/10.1007/978-3-319-42007-3_48.
Wirth, M., K. Shea, and T. Chen. “3D-Printing Textiles: Multi-Stage Mechanical Characterization of Additively Manufactured Biaxial Weaves.” Materials & Design 225 (2023): 111449. https://doi.org/10.1016/j.matdes.2022.111449.
Wu, Kui, Hannah Swan, and Cem Yuksel. “Knittable Stitch Meshes.” ACM Transactions on Graphics 38, no. 1 (February 2019): 1–13. https://doi.org/10.1145/3292481.
Wu, Kui, Xifeng Gao, Zachary Ferguson, Daniele Panozzo, and Cem Yuksel. “Stitch Meshing.” ACM Transactions on Graphics 37, no. 4 (August 2018): 1–14. https://doi.org/10.1145/3197517.3201360.
Wu, Xiaopei, and Li Li. “An Application of Generative AI for Knitted Textile Design in Fashion.” Design Journal 27, no. 2 (March 2024): 270–90. https://doi.org/10.1080/14606925.2024.2303236.
Xu, Jie, P. Y. Mok, C. W. M. Yuen, and R. W. Y. Yee. “A Web-Based Design Support System for Fashion Technical Sketches.” International Journal of Clothing Science and Technology 28, no. 1 (March 2016): 130–60. https://doi.org/10.1108/IJCST-03-2015-0042.
Yang, Cheng, Yuliang Zhou, Bin Zhu, Chunyang Yu, and Lingang Wu. “Emotionally Intelligent Fashion Design Using CNN and GAN.” Computer-Aided Design and Applications 18, no. 5 (January 2021): 900–13. https://doi.org/10.14733/cadaps.2021.900-913.
Yildirim, Gökhan, Nikolay Jetchev, Roland Vollgraf, and Urs Bergmann. “Generating High-Resolution Fashion Model Images Wearing Custom Outfits.” 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW) (October 2019): 3161–4. https://doi.org/10.1109/ICCVW.2019.00389.