
Impact of Generative AI on the Creative Economy

The transformative potential of generative artificial intelligence (AI) in creative society demands a nuanced approach to address its potential impacts on creators. Our first project aims to understand the preferences of different creators regarding attribution for their work.

Published on Sep 29, 2024

1. Introduction

The transformative potential of generative artificial intelligence (AI) in creative society demands a nuanced approach to address its potential impacts on creators. Our first project aims to understand the preferences of different creators regarding attribution for their work. Initial attempts to understand what is ethical and desirable have either surveyed artists and rightsholders [1] or speculated about their preferences based on existing academic sources [2][3][4][5]. These surveys and analyses represent a skewed sample of artists/rightsholders and are only able to investigate the stated preferences of these artists. Yet, it is well known that the stated preferences of individuals do not always align with their actions [6][7]. Thus, investigating the revealed preferences of artists and rightsholders would provide a clearer understanding of the ethics of ingestion and generative outputs. Our first project takes a step in this direction. Our results reveal that many creators of visual works on the Internet select licenses that require attribution, whereas creators of software select licenses that allow commercial use. While these licenses have no bearing on fair use adjudication in the legal system, they do present an ethical case for attribution. Next, we turn our attention to state-of-the-art attribution and unlearning methods for text-to-image models. We uncover that although current unlearning methods appear to unlearn according to certain metrics, when models are fine-tuned on unrelated datasets, they relearn the supposedly ‘unlearned’ concepts. Our results suggest that an important direction for future work is understanding and improving the robustness of modern unlearning algorithms to fine-tuning.

2. Revealed Preferences of Pre-authorized Licenses

Generative AI models, which create text, images, audiovisual works, and other multimedia resembling human creativity, introduce new challenges and prospects for artists and rightsholders. As AI systems become more sophisticated, their ability to autonomously generate works indistinguishable from those created by humans raises critical questions about copyright protection (originality, authorship, direct and indirect liability, defenses, and remedies), the industrial organization of creative industries, and ultimately social justice in a post–generative AI world.

Ongoing lawsuits between generative AI developers (such as OpenAI, Google, Meta, and Stability AI) and rightsholders (such as The New York Times, Getty Images, and author classes broadly) have amplified debate in the legal community about the propriety of ingesting copyright-protected Internet ‘data’ without authorization and generating outputs that remix such ‘data’ [8][9]. Although the legal question of what constitutes fair use is central to how these tools will develop, we focus here on the ethical question of what it would mean to respect artist and rightsholder preferences.

Our work takes a step toward understanding this ethical landscape by examining the revealed preferences of creators as reflected in open and quasi-open licensing regimes. We analyze the most commonly used licenses by copyright holders of images in the Creative Commons (CC) and copyright holders of code in GitHub code repositories. We discuss the ramifications that these licenses might have on the existing generative AI training models. Finally, we discuss the technical affordances needed from the AI community to meet the artists’ and rightsholders’ license conditions.

Table 1. Creative Commons licenses and their permissions.

| License | Attribution Required | Remixing Allowed | Commercial Use Allowed |
| --- | --- | --- | --- |
| CC BY | Yes | Yes | Yes |
| CC BY-NC | Yes | Yes | No |
| CC BY-SA | Yes | Yes | Yes |
| CC BY-NC-SA | Yes | Yes | No |
| CC BY-ND | Yes | No | Yes |
| CC BY-NC-ND | Yes | No | No |
| CC0 | No | Yes | Yes |

Figure 1

Breakdown of CC licenses used by images on openverse.org.

2.1. Analysis

2.1.1. Images (CC)

The CC is an international nonprofit organization that was established in 2001 with the mission of enabling easier and more ethical use of copyrighted works. The CC organization generally reflects an open philosophy, although it offers users a range of preauthorized licensing options. Table 1 presents the six CC licenses and the CC0 public-domain dedication. The salient features of these licenses for generative models are: waiver of rights (CC0), attribution (BY), restrictions on editing or remixing (i.e., on the preparation of derivative works; ND), restrictions on commercial use (NC), and the requirement to share alike (SA). Licensors may waive all rights or preauthorize usage with one or more reservations of rights.

There are currently over 2.5 billion works across the Internet that use CC licenses. These span text, audio, and images. Many of these works are scraped by Common Crawl [10], a commonly used tool for obtaining Internet-based data, and subsequently used as training data for most generative models. For our analysis, we focus on CC-licensed images curated by Openverse (openverse.org). Openverse sources over 700 million CC-licensed images from open application programming interfaces (APIs) (e.g., Flickr) and Common Crawl. From this database of images, sourced by Openverse and its engineers, we calculate the breakdown of the CC licenses used (Figure 1).
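
As a rough illustration of how such a breakdown can be computed, the sketch below tallies per-license image counts from the Openverse API. The endpoint, the `license` query parameter, and the `result_count` field are assumptions about the public API rather than details of our pipeline; consult the Openverse API documentation before relying on them.

```python
# Sketch: tally CC license usage via the Openverse API.
# Assumptions (not from the paper): the endpoint, the `license` query parameter,
# and the `result_count` field. See https://api.openverse.org for the current API.
import requests

LICENSES = ["by", "by-nc", "by-sa", "by-nc-sa", "by-nd", "by-nc-nd", "cc0"]

def license_breakdown():
    counts = {}
    for lic in LICENSES:
        resp = requests.get(
            "https://api.openverse.org/v1/images/",
            params={"license": lic, "page_size": 1},
            timeout=30,
        )
        resp.raise_for_status()
        # `result_count` is assumed to report the total number of matching images.
        counts[lic] = resp.json().get("result_count", 0)
    total = sum(counts.values()) or 1
    for lic, n in sorted(counts.items(), key=lambda kv: -kv[1]):
        print(f"CC {lic.upper():<9} {n:>12,}  ({100 * n / total:.1f}%)")
    return counts

if __name__ == "__main__":
    license_breakdown()
```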

The preferences revealed by this breakdown present important ethical considerations for training generative models on these images and generating outputs based on them. First, more than 90% of these licenses require attribution. Additionally, the majority of the licenses chosen do not allow commercial use or remixing of the copyrighted images.

Table 2. Open-source software licenses and their requirements.

| License | Copyright Notice | Modification Allowed | Commercial Use Allowed | Same License |
| --- | --- | --- | --- | --- |
| MIT | Yes | Yes | Yes | No |
| GPLv2 | Yes | Yes | Yes | Yes |
| Apache | Yes | Yes | Yes | No |
| GPLv3 | Yes | Yes | Yes | Yes |
| BSD 3-clause | Yes | Yes | Yes | No |
| BSD 2-clause | Yes | Yes | Yes | No |
| LGPLv3 | Yes | Yes | Yes | Yes |
| AGPLv3 | Yes | Yes | Yes | Yes |

2.1.2. Open-Source Code (GitHub)

GitHub is a cloud-based platform that allows developers to store Git repositories for their code. It is the most popular platform for open-sourcing code throughout the software community. We present the most commonly used open-source licenses and their differences in Table 2. Similar to CC-licensed images, we focus on the most salient permissions and conditions of the licenses for generative models: whether license and copyright notices are required, if modification of the code is allowed, and whether commercial use is allowed. Notably, all of the licenses allow commercial use.

As of May 2024, over 420 million repositories are stored on GitHub. Lacking direct API access, we focus on an analysis of license usage conducted by GitHub in 2015 [11]. The breakdown presented in Figure 2 indicates a markedly different set of preferences than the CC license breakdown. The MIT and Apache licenses have similar permissions and conditions with respect to those that matter most for generative model training and outputs.
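
For readers who do have API access, a breakdown of this kind could in principle be approximated with GitHub's repository search API, as in the hedged sketch below. The `license:` search qualifier, the `total_count` field, and the license keywords are assumptions about GitHub's API; search counts are approximate and heavily rate limited, and they are not the source of the 2015 figures reported here.

```python
# Sketch: approximate open-source license usage on GitHub via the repository
# search API. Assumptions (not from the paper): the `license:` search qualifier,
# the `total_count` field, and the license keyword spellings.
import requests

LICENSES = ["mit", "gpl-2.0", "apache-2.0", "gpl-3.0",
            "bsd-3-clause", "bsd-2-clause", "lgpl-3.0", "agpl-3.0"]

def github_license_counts(token=None):
    headers = {"Accept": "application/vnd.github+json"}
    if token:
        headers["Authorization"] = f"Bearer {token}"
    counts = {}
    for lic in LICENSES:
        resp = requests.get(
            "https://api.github.com/search/repositories",
            params={"q": f"license:{lic}", "per_page": 1},
            headers=headers,
            timeout=30,
        )
        resp.raise_for_status()
        counts[lic] = resp.json()["total_count"]
    return counts

if __name__ == "__main__":
    for lic, n in github_license_counts().items():
        print(f"{lic:<14} {n:>12,}")
```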

2.2. Potential Impact on Generative AI

We briefly discuss some of the potential impacts based on these revealed preferences.

State-of-the-art and commonly used image generation models such as DALL·E [12], Stable Diffusion [13], and Midjourney [14] were trained on images gathered from Common Crawl, many of which are CC-licensed images captured in the Openverse database. As evidenced in Figure 1, many of the CC-licensed images do not allow commercial use. What is currently unclear is how these individual licenses affect the overall use of a generative model. For example, does the use of NC-licensed CC images in training mean that the resulting models may not be distributed for commercial use? If so, absent a determination that such training and the associated outputs constitute fair use, OpenAI and Stability AI are already in violation of these licenses due to the commercial nature of their activities. In contrast with the impact of CC licenses on image generation models, the majority of licenses used for the GitHub repositories that have been cited as a primary source for training code generation models such as GPT-4 [15] and Copilot [16] allow commercial use. Thus, there is no impact on the monetization of GPT-4 and Copilot for these purposes.

Across both regimes, it is clear that attribution is an important technical affordance, whether to satisfy the CC license conditions, to understand what copyright notices would need to be included for generated code, or to ensure that generated code has the same license in cases where the outputs are based on code that has General Public License (GPL). Similarly, many images use a CC ND license, which prohibits generation of derivative works. How this applies to image generation models is dependent on how we view the mechanism by which outputs are created. If we view the mechanism as simply an interpolation of the training data, that interpretation would bar most outputs that are similar to images restricted by a CC ND license.

3. Failure Modes of Unlearning Copyrighted Data

The key ingredient enabling recent progress in generative AI has been scale: Modern generative models are parameterized by deep neural networks with billions of parameters and are trained on massive, Web-scale datasets. This presents novel challenges; for example, it is difficult to ensure the quality of the training data, and it becomes infeasible to retrain models from scratch. These challenges are of particular concern in the context of generative image models like Stable Diffusion [17][18][19], which may reproduce copyrighted material, generate offensive or explicit images, or compromise the privacy of individuals in the training corpus [19].

Figure 2

Breakdown of OSS licenses used by repos on GitHub.

These challenges have motivated work on machine unlearning, or efficiently updating a trained model to ‘forget’ portions of its training data [20][21]. In this work, we focus on ‘concept unlearning’ in the context of text-to-image diffusion models; for example, inducing a model to forget how to generate explicit content (rather than unlearning specific training examples). Although there has been substantial recent progress in this area, state-of-the-art methods struggle to unlearn more than a few dozen concepts before degrading overall model performance [22].

3.1. Contributions

We extend recent work by [22] on ‘MACE,’ or mass concept erasure in text-to-image diffusion models. First, we carefully replicate the results of [22] on a celebrity image benchmark and investigate a key limitation of their approach: namely, that erasing a given concept (e.g., the ability to generate an image of Adam Driver) will also degrade model performance on other, similar-sounding concepts (e.g., Adam Sandler; see Figure 3). We then implement alternative sampling schemes—or the way in which timesteps in the diffusion model’s trajectory are selected for concept unlearning—and demonstrate that the choice of sampling scheme has little effect on unlearning performance. Finally, we investigate a method that we term ‘forget-then-finetune’ for mitigating model degradation, which involves finetuning the diffusion model on a small set of retained concepts after concept unlearning. We find that although this approach does slightly improve performance on retained concepts—even those the model was not finetuned on—this comes at the cost of a degradation in the erasure performance. This is a significant and, to our knowledge, previously unknown limitation of MACE in settings where a deployed model must be dynamically updated to both erase existing concepts (e.g., in response to new copyright claims) and learn new concepts (e.g., as incremental training data becomes available over time). Because enabling incremental model updates is the primary motivation for machine unlearning, this apparent sensitivity to finetuning suggests a critical and thus far overlooked avenue for future research in machine unlearning.

Figure 3

MACE induces Stable Diffusion (v1.4) to ‘forget’ how to generate an image of Adam Driver (top panel). However, despite claims to the contrary [21], the model’s ability to generate an image of a similar-sounding celebrity is also degraded significantly (bottom panel).

3.2. Related Work

We build on a growing literature on machine unlearning, which develops methods for efficiently inducing a trained machine learning model to ‘forget’ some portion of its training data. In the context of classical discriminative models, machine unlearning is often motivated by a desire to preserve the privacy of individuals who may appear in the training data. A key catalyst for this work was the introduction of Article 17 of the European Union General Data Protection Regulation, which preserves an individual’s ‘right to be forgotten’ [23]. More recent work in machine unlearning has expanded to include modern generative AI models, which may reproduce copyrighted material, generate offensive or explicit content, or leak sensitive information that appears in their training data [19]. Our work focuses specifically on unlearning in the context of text-to-image diffusion models, beginning with the seminal works of [17] and [18]. The literature on diffusion models has grown rapidly over the last few years; although we cannot provide a comprehensive overview here, we refer to [19] for an excellent recent survey.

Our work is directly inspired by [21] and [22], which propose methods for inducing models to forget abstract concepts (as opposed to simply unlearning specific training examples). Our ‘forget-then-finetune’ method (detailed in Section 3.3) is also conceptually inspired by [24], which proposes a generic framework for ‘unbounded unlearning,’ or unlearning a large number of concepts or training examples without degrading model performance. Finally, a key ingredient in our approach, and that of [22], is the notion of low-rank adaptation (LoRA), a method initially proposed to enable the efficient fine-tuning of large language models [25]. In our context, we first make use of LoRA finetuning to efficiently erase concepts from a baseline text-to-image diffusion model and then further apply LoRA finetuning to avoid model degradation on retained concepts.

3.2.1. MACE: Mass Concept Erasure in Diffusion Models

Our work is most closely related to MACE [22], which establishes the state of the art in concept unlearning for diffusion models. At a high level, MACE finetunes a model to erase a given target phrase (e.g., ‘an image of a ship’) and related concepts (e.g., ‘an image of a boat’) through a combination of cross-attention refinement and LoRA [25]. Cross-attention refinement replaces the ‘key’ embeddings associated with each token in the target phrase with the corresponding ‘key’ embedding in a more generic phrase. For example, the ‘key’ vector corresponding to the token ‘image’ in ‘an image of a ship’ might be replaced with the corresponding embedding in ‘an image of an object’; this ensures the model does not embed residual information about the erased concept in the embeddings of any token that co-occurs with the target phrase. The second step, LoRA finetuning, perturbs the weights of the model to minimize activations in regions that correspond to the target phrase; these regions are identified by segmenting the image using Grounded-SAM [26][27]. These perturbations are learned via LoRA of the model parameters [25]. During this process, importance sampling selects timesteps for fine-tuning between t1 = 200 and t2 = 400 in the diffusion trajectory (Equation 5 in [22]). Finally, the LoRA modules corresponding to each erased concept are combined to produce a final model by formulating the ‘fusion’ of multiple LoRA modules as a quadratic programming problem. For additional detail on MACE, we refer to [22].
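
To make the cross-attention refinement step concrete, the following is a toy PyTorch sketch of the kind of closed-form key-projection update described above: a new key matrix is chosen so that tokens co-occurring with the target phrase map to the keys they would have in a generic phrase, while keys for a set of prior-preservation embeddings stay approximately unchanged. The function name, the ridge-style prior term, and all dimensions are illustrative assumptions; the exact objective and implementation in MACE differ (see [22]).

```python
# Toy sketch of a closed-form cross-attention "key" refinement in the spirit of
# MACE [22]. This is an illustration under assumptions, not the authors' code.
import torch

def refine_key_projection(W_k, target_embs, generic_embs, prior_embs, lam=0.1):
    """
    W_k:          (d_k, d_emb) existing key projection of one cross-attention layer.
    target_embs:  (n, d_emb) embeddings of tokens co-occurring with the target phrase.
    generic_embs: (n, d_emb) embeddings of the same tokens in a generic phrase.
    prior_embs:   (m, d_emb) embeddings whose keys should be preserved.
    Returns W_k' minimizing ||W_k' e_i - W_k g_i||^2 + lam * ||W_k' p_j - W_k p_j||^2.
    """
    d_emb = W_k.shape[1]
    # Normal equations of the least-squares objective above.
    A = target_embs.T @ target_embs + lam * prior_embs.T @ prior_embs
    B = (W_k @ generic_embs.T) @ target_embs + lam * (W_k @ prior_embs.T) @ prior_embs
    return B @ torch.linalg.inv(A + 1e-6 * torch.eye(d_emb))

# Example with random embeddings (shapes only; real use would take CLIP text embeddings).
d_emb, d_k = 768, 320
W_k = torch.randn(d_k, d_emb)
new_W_k = refine_key_projection(
    W_k,
    target_embs=torch.randn(8, d_emb),
    generic_embs=torch.randn(8, d_emb),
    prior_embs=torch.randn(64, d_emb),
)
```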

3.3. Methods and Evaluation

In this section, we discuss our proposed methodology to address the issue of erasing overlapping concepts. Additionally, we describe our evaluation, using the ability to ‘unlearn’ the faces of celebrities as a natural benchmark for concept unlearning [22].

We investigate what we term ‘forget-then-finetune.’ Specifically, we alter MACE [22] by adding an additional finetuning step in which, after erasing a set of concepts, we continue to train the diffusion model on a set of images the model is expected to retain. Our hypothesis, motivated by the work of [24], is that finetuning on a small set of retained concepts may mitigate the degradation on retained concepts with only a modest increase in the computational cost of unlearning.

As in the prior section, we focus on celebrity erasure from Stable Diffusion 1.4 as a benchmark and begin by replicating MACE to produce three variants of Stable Diffusion 1.4 that have unlearned 1, 5, and 10 celebrities, respectively. We then curate four different datasets of celebrity images to evaluate whether finetuning each model improves their ability to retain concepts that are unrelated to the concepts being erased. The first dataset consists of 250 images of 10 celebrities (i.e., 25 of each unique celebrity) chosen arbitrarily from the list of celebrities included in the training set curated by [22]. These images are sampled from Stable Diffusion 1.4 in a variety of styles, using prompts adopted from [22]; for example, one such prompt might be ‘an oil painting of Barack Obama.’ We provide the full set of prompts in Appendix B. The second dataset is similar to the first but is curated to include celebrities whose names are similar to those being erased. For example, if the model has been trained to forget Adam Driver, the finetuning set might be curated to include Adam Sandler; this choice is motivated by the observation that erasing one celebrity will cause the model to also forget how to generate similar-sounding celebrities. We generate these pairs of similar-sounding celebrities by prompting ChatGPT (using GPT-4) to provide a set of celebrities whose names sound similar to those in the erased set. Finally we produce a third and fourth finetuning set that are curated in an identical manner to the first but are much larger, containing 2,225 images. The fourth dataset was generated after observing the results of the first three experiments and additionally uses a different set of five prompts (e.g., ‘generate a watercolor image of...’ instead of ‘generate an official portrait of...’) to test whether our results were merely an artifact of the image styles. The largest of these datasets required four A100 hours to produce; we provide all prompts and other details needed to replicate our dataset curation process in Appendix B.
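
A minimal sketch of this dataset generation process, using the HuggingFace diffusers library, is shown below. The celebrity list and prompt templates are those reported in Appendix B; the sampling settings (inference steps, guidance scale) and output layout are illustrative assumptions rather than our exact configuration.

```python
# Sketch: generating a finetuning dataset of celebrity images with Stable Diffusion v1.4.
# Celebrities and prompt templates follow Appendix B; scheduler/guidance settings and
# the file layout below are assumptions.
from pathlib import Path

import torch
from diffusers import StableDiffusionPipeline

CELEBRITIES = ["Arnold Schwarzenegger", "Barack Obama", "Beth Behrs", "Bill Clinton",
               "Bob Dylan", "Bob Marley", "Bradley Cooper", "Bruce Willis",
               "Bryan Cranston", "Cameron Diaz"]
PROMPTS = ["A portrait of {name}",
           "An image capturing {name} at a public event",
           "A sketch of {name}",
           "An oil painting of {name}",
           "{name} in an official photo"]
IMAGES_PER_PROMPT = 5  # 5 prompts x 5 images = 25 images per celebrity

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

out_dir = Path("finetune_random")
for name in CELEBRITIES:
    for p_idx, template in enumerate(PROMPTS):
        prompt = template.format(name=name)
        for i in range(IMAGES_PER_PROMPT):
            image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
            target = out_dir / name.replace(" ", "_")
            target.mkdir(parents=True, exist_ok=True)
            image.save(target / f"prompt{p_idx}_{i}.png")
```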

On each of these datasets, we finetune the ‘erase 1,’ ‘erase 5,’ and ‘erase 10’ variants of Stable Diffusion for 1,000 steps. To ensure that this approach remains computationally feasible, we finetune using LoRA: we freeze the model weights and instead optimize over a low-rank decomposition of the parameter matrices (see [25] for details). We adopt the default hyperparameters recommended in the HuggingFace diffusers tutorial for finetuning text-to-image diffusion models [28]. In particular, we use a learning rate of 0.0001 with a constant schedule; the Adam optimizer with β1, β2, weight decay, and epsilon of 0.9, 0.999, 0.01, and 1e-8, respectively; a max gradient norm of 1; and random image flips and crops to augment the training dataset.
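
The sketch below shows how this LoRA finetuning configuration might be set up with the peft and diffusers libraries. The stated hyperparameters (learning rate, Adam parameters, gradient clipping, augmentations) follow the text; the LoRA rank, target modules, and the omitted denoising-loss training loop are assumptions for illustration.

```python
# Sketch: LoRA finetuning setup matching the hyperparameters stated in the text.
# Rank/alpha, target modules, and the (omitted) denoising-loss training loop are
# assumptions; only the stated values (lr, Adam betas, etc.) come from the paper.
import torch
from diffusers import StableDiffusionPipeline
from peft import LoraConfig, get_peft_model
from torchvision import transforms

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
unet = pipe.unet

lora_config = LoraConfig(
    r=4, lora_alpha=4,
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],  # attention projections (assumed)
)
unet = get_peft_model(unet, lora_config)  # freezes base weights, adds trainable LoRA params

optimizer = torch.optim.AdamW(
    (p for p in unet.parameters() if p.requires_grad),
    lr=1e-4, betas=(0.9, 0.999), weight_decay=0.01, eps=1e-8,
)
lr_scheduler = torch.optim.lr_scheduler.ConstantLR(optimizer, factor=1.0)
max_grad_norm = 1.0  # applied via torch.nn.utils.clip_grad_norm_ in the training loop

# Data augmentation during finetuning: random horizontal flips and random crops.
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(512, pad_if_needed=True),
    transforms.ToTensor(),
])
```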

3.4. Evaluation

We evaluate the performance of these models across four different metrics, using the MACE models as the baseline. To measure sample quality, we compute the Fréchet Inception Distance (FID) [29] against the sample of 30,000 Microsoft COCO images [30] used in [22]. For semantic alignment, we measure the CLIP score (i.e., cosine similarity in embedding space) between each generated image and the celebrity name used to generate it. Finally, we use the GIPHY Celebrity Detector (GCD) [31] to assess the ability of the model to erase and retain concepts. We use the same metrics as [22], where erasure accuracy (efficacy) is the GCD accuracy on generations of the erased concepts and retain accuracy (specificity) is the GCD accuracy on generations of the retained concepts. We compare these four metrics across the number of celebrity concepts erased (1, 5, and 10) to understand how performance is affected as the algorithm scales. The celebrity concepts erased in each task can be found in Appendix A, and the retained ones in Table 7 of [22].
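
A sketch of how these metrics can be computed with off-the-shelf tools is given below. The torchmetrics implementations of FID and CLIP score stand in for the exact evaluation code of [22], the GCD wrapper is only stubbed, and the feature dimension and CLIP checkpoint are assumptions.

```python
# Sketch: computing FID and CLIP score with torchmetrics; GCD accuracy is computed
# with the GIPHY Celebrity Detector [31], which is only stubbed out here.
import torch
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.multimodal.clip_score import CLIPScore

def fid_score(real_images: torch.Tensor, generated_images: torch.Tensor) -> float:
    # Both tensors: (N, 3, H, W), dtype uint8.
    fid = FrechetInceptionDistance(feature=2048)
    fid.update(real_images, real=True)        # e.g., the 30,000 MS-COCO images
    fid.update(generated_images, real=False)  # images sampled from the unlearned model
    return fid.compute().item()

def clip_alignment(generated_images: torch.Tensor, prompts: list) -> float:
    # Cosine similarity between image embeddings and the prompts (which contain
    # the celebrity names) used to generate them.
    metric = CLIPScore(model_name_or_path="openai/clip-vit-base-patch16")
    return metric(generated_images, prompts).item()

def gcd_accuracy(image_paths: list, expected_names: list) -> float:
    # Placeholder: run the GIPHY Celebrity Detector on each image and report the
    # fraction recognized as the expected celebrity (efficacy on erased concepts,
    # specificity on retained concepts).
    raise NotImplementedError("wrap the GCD model from github.com/Giphy/celeb-detection-oss")
```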

3.5. Results

We present the efficacy, specificity, CLIP score, and FID measures for each variant of forget-then-finetune in Figure 4. We find that although finetuning can slightly improve the model’s ability to generate retained celebrities, doing so also improves the model’s ability to generate erased celebrities; both are more pronounced on the larger finetuning sets and when more celebrities are initially erased. This is a striking fact, because the models are not finetuned on any celebrities that were erased (or indeed, any celebrities that appear in the evaluation set). An example of this phenomenon is presented in Figure 5—although MACE succeeds in erasing ‘Angelina Jolie,’ a modest degree of finetuning on unrelated celebrities substantially reverses this gain. This indicates that information related to the erased concept is retained by the model and that this information can be ‘restored’ via finetuning. In Figure 6, we track this phenomenon over the course of finetuning on the ‘large’ dataset and show that it can occur even when finetuning does not substantially improve model performance on the retained set. Taken together, these results highlight a critical limitation of MACE, which appears to prevent its application in settings where the model must incrementally learn new concepts (in addition to forgetting existing ones). Although we do not have a low-level mechanistic understanding of this phenomenon, we conjecture that it is related to the fact that [22] defined concept erasure as the replacement of a specific concept (e.g., ‘Angelina Jolie’) with a generic super category (e.g., ‘a woman’ or ‘a person’) rather than, for example, random noise or a blank image (recall the cross-attention refinement step). Conceptually, this means that finetuning to erase a concept might shift the learned embeddings for a given target phrase along a latent ‘specific-to-generic’ axis, and finetuning on additional specific concepts serves to partially reverse this effect. We leave further investigation of this phenomenon to future work.

Figure 4

The performance of each variant of forget-then-finetune relative to the MACE baseline. ‘Random’ and ‘similar’ are finetuned on 250 celebrity images, either selected at random or curated to include names that are similar to those in the erased set. ‘Large’ and ‘alternate’ are corresponding results after finetuning on larger datasets of 2,225 random celebrity images; ‘alternate’ additionally varies the prompts as described in Section 3.3. Efficacy, specificity, CLIP score, and FID are defined as in Section 3.4.

Figure 5

Comparing images generated by Stable Diffusion v1.4 (left) after applying MACE [22] (center) and after further finetuning the model on the ‘large’ dataset (see Figure 4). Although MACE succeeds in erasing Angelina Jolie, finetuning on unrelated celebrities partially recovers the erased concept.

Figure 6

The performance of the erase 10 model over the course of finetuning on the ‘large’ dataset. Although performance on the retained set remains approximately constant (right panel), the model’s ability to generate erased concepts increases sharply (left panel). Efficacy and specificity are defined as in Section 3.4.

Appendix A: MACE

Following [22], we define the erase 1 model as the one that erases Melania Trump; the erase 5 model as one that erases Adam Driver, Adriana Lima, Amber Heard, Amy Adams, and Andrew Garfield; and the erase 10 model as one that erases the previous five celebrities and Angelina Jolie, Anjelica Huston, Anna Faris, Anna Kendrick, and Anne Hathaway. The retained concepts can be found in Table 7 of [22]. Each model is produced by exactly replicating the results of [22]. We include the main hyperparameter details in Section 3.2; we refer to their work [22] and public codebase for additional details.

Appendix B: Forget-Then-Finetune

In this appendix, we provide additional details related to the dataset curation process for the forget-then-finetune method (Section 3.3). The first dataset, the ‘random’ dataset, includes 25 images of each of 10 distinct celebrities (250 in total), chosen arbitrarily from those used in [22]. These celebrities are Arnold Schwarzenegger, Barack Obama, Beth Behrs, Bill Clinton, Bob Dylan, Bob Marley, Bradley Cooper, Bruce Willis, Bryan Cranston, and Cameron Diaz. For each celebrity, we generated five images for each of five prompts (25 total). These prompts were:

  1. “A portrait of [name]”

  2. “An image capturing [name] at a public event”

  3. “A sketch of [name]”

  4. “An oil painting of [name]”

  5. “[name] in an official photo”

For the ‘similar’ dataset, we curate a set of 11 celebrities whose names are similar to those erased by the erase 10 model and the erase 1 model. These celebrities were chosen by prompting ChatGPT (GPT-4) with the prompt “For each of the celebrities below, give me another celebrity with a similar sounding name. I do not want fictional characters. I want famous celebrities. If needed, feel free to pick a famous celebrity who shares a first name with the ones I provided.” These celebrities are Adam Levine, Adrienne Bailon, Amber Riley, Amy Schumer, Andrew Lincoln, Angélica Vale, Angelica Bridges, Anna Paquin, Anna Kournikova, Anne Heche, and Melanie Iglesias.

For the ‘large’ dataset, we generate 25 images each of 89 distinct celebrities, chosen by beginning with the full set of 100 used for the largest experiment in [22] and removing the 11 that appear in our erasure set. These celebrities are listed in Table 7 of [22]. For the ‘alternate’ dataset, we use the same set of celebrities but use a different set of five image styles. These image styles were generated by prompting ChatGPT (GPT-4) with the following prompt: “I am prompting Stable Diffusion to generate images of celebrities in different styles. For example, my prompts include the following: A portrait of Arnold Schwarzenegger, An image capturing Arnold Schwarzenegger at a public event, An oil painting of Arnold Schwarzenegger, A sketch of Arnold Schwarzenegger, Arnold Schwarzenegger in an official photo. Can you give me five more prompts which generate images in five new styles?” These five prompts are:

  1. “A watercolor painting depicting [name]”

  2. “A digital artwork of [name] at a public event”

  3. “A surrealistic artwork of [name]”

  4. “A charcoal drawing of [name]”

  5. “[name] in a noir-style black and white photograph”

Bibliography

  1. Lovato, Juniper, Julia Zimmerman, Isabelle Smith, Peter Dodds, and Jennifer L. Karson. “Foregrounding Artist Opinions: A Survey Study on Transparency, Ownership, and Fairness in AI Generative Art.” Preprint, submitted May 14, 2024. https://arxiv.org/html/2401.15497v4.

  2. Jiang, Harry H., Lauren Brown, Jessica Cheng, Mehtab Khan, Abhishek Gupta, Deja Workman, Alex Hanna, Johnathan Flowers, and Timnit Gebru. “AI Art and Its Impact on Artists.” In Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, edited by Francesca Rossi, Sanmay Das, Jenny Davis, Kay Firth-Butterfield, and Alex John, 363–74. New York: Association for Computing Machinery, 2023.

  3. Attard-Frost, Blair. Generative AI Systems: Impacts on Artists & Creators and Related Gaps in the Artificial Intelligence and Data Act, June 5, 2023. https://doi.org/10.2139/ssrn.4468637.

  4. Latikka, Rita, Jenna Bergdahl, Nina Savela, and Atte Oksanen. “AI as an Artist? A Two-Wave Survey Study on Attitudes Toward Using Artificial Intelligence in Art.” Poetics 101 (December 2023): 101839.

  5. Brunder, Kristiana M. “AI Art and Its Implications for Current and Future Artists.” Bachelor’s senior project, Purchase College, State University of New York, 2023.

  6. de Corte, Kaat, John Cairns, and Richard Grieve. “Stated versus Revealed Preferences: An Approach to Reduce Bias.” Health Economics 30, no. 5 (May 2021): 1095–123.

  7. Craig, Ashley C., Ellen Garbarino, Stephanie A. Heger, and Robert Slonim. “Waiting to Give: Stated and Revealed Preferences.” Management Science 63, no. 11 (November 2017): 3672–90.

  8. Samuelson, Pamela. “Generative AI Meets Copyright.” Science 381, no. 6654 (July 2023): 158–61.

  9. Samuelson, Pamela. “Thinking about Possible Remedies in the Generative AI Copyright Cases.” Communications of the ACM (forthcoming).

  10. Patel, Jay M. “Introduction to Common Crawl Datasets.” In Getting Structured Data from the Internet: Running Web Crawlers/Scrapers on a Big Data Production Scale, 277–324. Berkeley, CA: Apress, 2020.

  11. Balter, Ben. “Open Source License Usage on GitHub.com.” GitHub (blog), December 20, 2021. https://github.blog/2015-03-09-open-source-license-usage-on-github-com/.

  12. Ramesh, Aditya, Mikhail Pavlov, Gabriel Goh, Scott Gray, Chelsea Voss, Alec Radford, Mark Chen, and Ilya Sutskever. “Zero-Shot Text-to-Image Generation.” Proceedings of Machine Learning Research 139 (2021): 8821–31.

  13. Rombach, Robin, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. “High-Resolution Image Synthesis with Latent Diffusion Models.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 10684–95. New York: Institute of Electrical and Electronics Engineers, 2022.

  14. “Midjourney.” Midjourney. Accessed June 3, 2024. https://www.midjourney.com/home.

  15. Chen, Mark, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, et al. “Evaluating Large Language Models Trained on Code.” Preprint, submitted July 7, 2021. https://arxiv.org/abs/2107.03374.

  16. Gershgorn, Dave. “GitHub and OpenAI Launch a New AI Tool that Generates Its Own Code.” The Verge, June 29, 2021. https://www.theverge.com/2021/6/29/22555777/github-openai-ai-tool-autocomplete-code.

  17. Rombach, Robin, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer. “High-Resolution Image Synthesis with Latent Diffusion Models.” Preprint, submitted December 20, 2021. https://arxiv.org/abs/2112.10752.

  18. Ho, Jonathan, Ajay Jain, and Pieter Abbeel. “Denoising Diffusion Probabilistic Models.” Preprint, submitted June 19, 2020. https://arxiv.org/abs/2006.11239.

  19. Zhang, Chenshuang, Chaoning Zhang, Mengchun Zhang, and In So Kweon. “Text-to-Image Diffusion Models in Generative AI: A Survey.” Preprint, submitted March 14, 2023. https://arxiv.org/abs/2303.07909.

  20. Belrose, Nora, David Schneider-Joseph, Shauli Ravfogel, Ryan Cotterell, Edward Raff, and Stella Biderman. “LEACE: Perfect Linear Concept Erasure in Closed Form.” In Advances in Neural Information Processing Systems, edited by A. Oh, T. Naumann, A. Globerson, K. Saenko, M. Hardt, and S. Levine, vol. 36. La Jolla, CA: Neural Information Processing Systems Foundation, 2024.

  21. Lu, Shilin, Zilan Wang, Leyang Li, Yanzhu Liu, and Adams Wai-Kin Kong. “MACE: Mass Concept Erasure in Diffusion Models.” Preprint, submitted March 10, 2024. https://arxiv.org/abs/2403.06135.

  22. European Union. “Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC (General Data Protection Regulation).” EUR-Lex, Document 32016R0679, May 2016. https://eur-lex.europa.eu/eli/reg/2016/679/oj.

  23. Kurmanji, Meghdad, Peter Triantafillou, Jamie Hayes, and Eleni Triantafillou. “Towards Unbounded Machine Unlearning.” Preprint, submitted February 20, 2023. https://arxiv.org/abs/2302.09880.

  24. Hu, Edward J., Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. “LoRA: Low-Rank Adaptation of Large Language Models.” Preprint, submitted June 17, 2021. https://arxiv.org/abs/2106.09685.

  25. Kirillov, Alexander, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, et al. “Segment Anything.” Preprint, submitted April 5, 2023. https://arxiv.org/abs/2304.02643.

  26. Liu, Shilong, Zhaoyang Zeng, Tianhe Ren, Feng Li, Hao Zhang, Jie Yang, Qing Jiang, et al. “Grounding DINO: Marrying DINO with Grounded Pre-training for Open-Set Object Detection.” Preprint, submitted March 9, 2023. https://arxiv.org/abs/2303.05499.

  27. von Platen, Patrick, Suraj Patil, Anton Lozhkov, Pedro Cuenca, Nathan Lambert, Kashif Rasul, Mishig Davaadorj, et al. “Diffusers: State-of-the-Art Diffusion Models.” GitHub, 2022. https://github.com/huggingface/diffusers.

  28. Parmar, Gaurav, Richard Zhang, and Jun-Yan Zhu. “On Aliased Resizing and Surprising Subtleties in GAN Evaluation.” In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11410–20. New York: Institute of Electrical and Electronics Engineers, 2022.

  29. Lin, Tsung-Yi, Michael Maire, Serge Belongie, James Hays, Pietro Perona, Deva Ramanan, Piotr Dollár, and C. Lawrence Zitnick. “Microsoft COCO: Common Objects in Context.” In Computer Vision – ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6–12, 2014, Proceedings, Part V, edited by David Fleet, Tomas Pajdla, Bernt Schiele, and Tinne Tuytelaars, 740–55. Cham, Switzerland: Springer, 2014.

  30. Hasty, Nick, Ihor Kroosh, Dmitry Voitekh, and Dmytro Korduban. “Giphy Celebrity Detector.” GitHub, 2019. https://github.com/Giphy/celeb-detection-oss.
