Beyond Copyright: Tailoring Responses to Generative AI & The Future of Creativity

Derek Slater, Betsy Masiello / Sep 19, 2023

Betsy Masiello and Derek Slater are the founding partners of Proteus Strategies, a boutique tech policy strategy and advocacy firm.

Catherine Breslin / Better Images of AI / Silicon Closeup / CC-BY 4.0

In less than a year, generative AI has transformed notions of what computers might be capable of. Now, we’re in the midst of a heated debate about what role computation will play in the future of art and the creative industries. There is already a lengthy docket of litigation aimed at generative AI companies, and most of these cases are focused on stopping the training of generative AI on copyrighted works without a license.

No doubt, the training of generative AI can implicate copyright, and it’s an important locus of policy debate. Indeed, even at this initial stage of technology development and litigation, copyright is being exhaustively debated.

We don’t focus on that in this article. Many scholars, developers, and others have elaborated on why copyright should permit training of generative AI on copyrighted works without a license, and have responded to common misconceptions about copyright – such as the notion that copying, or making use of, a copyrighted work without permission is per se infringement. We find these proponents’ arguments to be quite compelling, but we acknowledge that others view the matter differently, and one cannot make a fully categorical argument about how existing law applies in all cases (even if one just focuses on a single jurisdiction). In any case, while we link to those arguments above, we won’t delve deeply into them here.

Instead, our goal in this brief piece is to engage with concerns about generative AI’s impact on creativity without being bound by the contours of copyright. Copyright is too often treated as a ‘hammer’ for many different ‘nails’ – the singular instrument for addressing a variety of different economic, cultural, and other concerns. As a result, the current debate can obstruct alternative ways of thinking that may be more suitable or practical.

We focus here on three particular concerns with the training of generative AI, and highlight alternative measures to copyright that can help address them. We admittedly will simplify aspects of the debate and ignore others entirely in order to help broaden (but not resolve) the frame for envisioning solutions. Our hope is that this approach to an incredibly complex, fast-moving set of questions may point more clearly toward constructive paths forward.

Concern #1: Generative AI Is Unfair Because It Uses Content Without Permission

Many stakeholders feel that training generative AI on existing works without permission is per se wrong. While this has been framed as a copyright issue, it is not only that. Adobe’s generative AI system was trained on content with a clear license from the copyright holders, but creators have still objected that they didn’t anticipate this specific use when they agreed to the license. Even fan fiction authors – who often build on copyrighted works without permission – have raised concerns about generative AI trained on their works.

This speaks to a feeling that training AI on a work breaks an implicit social contract, if not the law. One way this is sometimes framed is "it's not just about copyright, it's about consent."

On the one hand, this framing doesn't help resolve the debate – it just shifts the terms. Debating the bounds of copyright means addressing whether and to what extent rights holders can demand consent for certain uses under the law, and there are many uses for which copyright does not typically require consent (e.g., reading a book, making a parody). Invoking consent by itself does not determine whether and how to sustain such uses or draw different lines, whether via copyright, other rights, or other areas of the law. For instance, the idea of crafting new "data rights" related to AI training still requires reckoning with trade-offs, including how such requirements might impede other creators and people who benefit from generative AI tools.

On the other hand, the broader framing around consent opens the door to other types of mechanisms that might help address the underlying concern. Norms and technical standards can also help people define, signal, and respect each other's preferences. These mechanisms still come with tradeoffs, but norms and standards may be able to evolve with and be tailored to different uses and circumstances in ways that the law is not well suited to.

It can be easy to overlook the many ways in which creative endeavors and industries are regulated today as much through ‘copy-norms’ as by formal rights. For instance, norms around attribution and plagiarism play a critical role for a range of everyday creators and innovators, even where the law does not necessarily require it. Fashion and cuisine (that is, food recipes and restaurants) have thrived in an environment where lots of copying is permitted under the law; at the same time, norms can still shape behavior in these areas, even if they are contested and continue to evolve over time.

One particularly pertinent institutionalized norm in the context of AI is the robots.txt standard, which is widely used by website operators to communicate whether and how they want their sites to be accessed (‘crawled’) by automated mechanisms. While not instantiated by or explicitly required by law, commercial search engines and others broadly comply with the standard. OpenAI has recently explained how robots.txt can be used to disallow its web crawler, GPTBot, from accessing content for AI training purposes. Spawning.ai is developing a standard specifically aimed at signaling whether a site accedes to AI training, and a group of publishers within the World Wide Web Consortium (W3C) is working along similar lines.
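To make the mechanism concrete: the directive OpenAI documents for blocking GPTBot is an ordinary robots.txt rule group, and Python's standard-library `urllib.robotparser` can sketch how a crawler that honors the standard would interpret it. The robots.txt content below follows OpenAI's published GPTBot guidance; the site URL is illustrative.

```python
from urllib import robotparser

# A robots.txt that blocks OpenAI's GPTBot crawler entirely
# while leaving the site open to all other user agents.
# (The URL below is a placeholder, not a real site.)
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# GPTBot is disallowed everywhere; a generic crawler is permitted.
print(parser.can_fetch("GPTBot", "https://example.com/article"))       # False
print(parser.can_fetch("SomeOtherBot", "https://example.com/article")) # True
```

Note that, as the article stresses, nothing in law forces a crawler to consult this file; the signal works only because major operators choose to honor the norm.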

More generally, generative AI companies are taking steps to provide creators with more choice and control. Spawning.ai is also building tools for creators and rights holders to signal their preferences, upload that content to a database, and then make those signals available to third parties, and StabilityAI has already incorporated these signals in training its Stable Diffusion image generation tool. Google has also kicked off an effort to investigate robots.txt-like solutions. Relatedly, OpenAI announced that it will take steps to limit generating content in the style of a living creator, and Microsoft provides a way for creators to limit such outputs through its Image Creator with Bing tool.

Meanwhile, communities of creators and fans are also developing norms appropriate for their own contexts. For instance, Kickstarter will be asking developers of AI tools to describe how they manage consent for use of content. That way, users of the site can factor in that information when deciding whether or not to back a project.

Again, the emergence of new norms and supporting tools does not mean that everyone will be satisfied or that we can avoid difficult tradeoffs. But they can provide a practical path forward for reconciling different interests.

Concern #2: Generative AI Will Replace Creative Jobs

Copyright is justified as a means to incentivize creativity, giving creators certain exclusive rights over their works. When someone else produces a creative work that is substantially similar to another in order to compete in the market with that preexisting work, it can infringe those rights. This is possible regardless of the medium and remains a concern with how generative AI may be used. A user may prompt an AI tool to help them generate an infringing work, and it's possible that AI systems can "memorize" and reconstitute particular copyrighted elements from their training set in a generated output. Consistent with what we said at the outset, for purposes of this essay we leave to the side a discussion of how the possibility for infringing outputs might give rise to copyright liability for the developer who trained the generative AI on copyrighted works (although we think the instances in which such liability exists are much more limited than many suggest, at least in the U.S.).

But concerns about generative AI often take a more general frame than looking at the implications of specific outputs that are similar to and actually infringe on others’ works. Instead, they look at protecting creators’ livelihoods overall, and how even non-infringing outputs may compete with the people who generated the training data. For instance, illustrators and web designers will face new competition from people who, thanks to new technology, can enter those professions with greater ease. The concern here is not that an illustrator will literally copy from other creators, but that they will create new illustrations that still compete with and threaten the livelihoods of those existing creators.

Even if you believe generative AI will create untold economic benefits in aggregate, there are no guarantees about how those benefits will be distributed. The oft-mentioned response that historically technological progress doesn’t destroy jobs in aggregate fails to address the individual or emotional consequences of the earth shifting beneath one’s feet. In some sense, this is a rehashing of the early-2000s concern workers had about training their offshored replacements. At that time, various policy responses were considered to blunt the effects of offshoring on displaced workers, and a similar lens may be appropriate for the effects of generative AI on creative industries.

The recent strike by the Writers Guild of America is instructive here. To be sure, some writers oppose use of generative AI, full stop. But the negotiating position of the Guild is more nuanced, focusing particularly on how the technology is introduced and used in the specific context of a writer’s relationship with a studio. The existing labor agreements for these workers provide different benefits to the person who writes an initial draft versus the person who edits and refines it, with the latter entitled to less compensation. The Guild is concerned that if generative AI is used to draft material under the current agreements, Guild members will not see any economic benefits from the use of this technology; only the studios will. In other words, their policy response is focused on the distribution of technology’s benefits, as opposed to limiting the technology itself. The studios appear ready to work towards provisions along these lines.

Finding this type of delicate balance between protecting creators’ economic interests and embracing the creative potential of generative AI is a productive way to reconcile competing interests. There are a range of policy vectors that similarly warrant consideration. For example, today’s tax policy has the effect of favoring business investments in capital over hiring labor. And of course there will be opportunities to upskill and retrain workers displaced by generative AI, and we could create incentives that motivate companies to lead the way on that effort.

These issues also point to a growing chasm between the social safety net we may need in the face of AI and the one we presently have. It is likely not a coincidence that Sam Altman, CEO of OpenAI, has for many years advocated for Universal Basic Income. An entire economic policy agenda for raising the floor of social and economic protections, and for financing them, is probably needed.

Finally, norms could also play a role in stabilizing market opportunities for income. Seven of the leading AI companies have already voluntarily committed to labeling AI-generated content, which points to a market solution for preserving creators’ economic opportunities. Just as there remain markets for handcrafted goods even in the face of mass-produced goods of equal or better quality, it is possible to imagine a future where entirely human-generated creative works enjoy a market that is distinct from the market for AI-generated creative works. Efforts like watermarking may facilitate differentiation between the types of creative work available, and allow market actors to express and act on their preferences accordingly.

Concern #3: Generative AI Will Lead to Cultural Deterioration

Similar to the anxiety that generative AI will destroy jobs, there are some who worry it will lead to cultural deterioration. A letter signed by hundreds of creators calling for publishers to support “editorial art made by humans, not server farms” puts it this way:

AI purports to have the capability to create art, but it will never be able to do so satisfactorily because its algorithms can only create variations of art that already exists. It creates only ersatz versions of illustrations having no actual insight, wit, or originality…. Over time, this will impoverish our visual culture. Consumers will be trained to accept this art-looking art, but the ingenuity, the personal vision, the individual sensibility, the humanity will be missing.

In this sweeping form, the claim is unfounded, or at best highly subjective and suspect. After all, even a cursory review of people producing art with generative AI reveals countless examples that by any reasonable judgment have “insight, wit, or originality.” All creativity builds on the past at some level, a "variation" on what has come before. Moreover, history is instructive: photography, synthesizers, and other tools for creating art have faced the complaint that they somehow debase the very nature of art itself, yet they eventually became highly regarded parts of culture.

There is a narrower form of this argument that is worth further attention, however. Underlying the fear in the letter above that “consumers will be trained to accept this art-looking art” is the notion that these works will effectively crowd out mass market appetite – and thus commercial prospects – for many other forms of artistic expression. This concern echoes fears about distribution of the benefits from generative AI, implicitly suggesting that a few powerful companies will use generative AI to drive commoditized, commercial works that dominate consumers’ attention.

Here, it’s worth reflecting on whether and how generative AI is of unique concern. After all, there is already significant concentration among media entities (e.g. record companies, movie studios, book publishers). Culture is already awash in franchise movies and other media, featuring sequels, prequels, and spin-offs of existing mass cultural objects. For instance, the ink was barely dry on the smash hit Barbie movie before the company Mattel began working on movies around many of its toys and games, including the Magic 8-Ball toy and the card game Uno. Pop music is increasingly engineered simply for the purpose of generating a hit, and as a result such songs are sounding more and more alike.

We don’t mean to debate the merits of these and other developments in media and art. Rather, our point is simply that concerns about generative AI must be situated alongside the long-running debate about consequences of commercial interests and industry concentration in art and media. It would be prudent to look at how to address these issues in the round, rather than singling out generative AI.

Policies might focus here on fostering competition and ways to address media diversity, including means to directly support small, independent, and nonprofit enterprises. In addition, while it would be strange to legislate taste by restricting the production of certain allegedly ‘low’ art, norms and community can be more appropriate tools. As noted above, one can imagine efforts to encourage people to think about the origins of the art they enjoy and purchase.

Looking Ahead

As we said at the outset, we are doubtful that copyright is or should be the tool to address the many complex concerns around generative AI training. Some will disagree, and litigation around these issues will press ahead, but we believe there are more viable paths forward, worthy of more of our collective attention. First, stakeholders can work together on clearer norms and tools that provide creators with ways of expressing their preferences over how their works are used by generative AI. Second, generative AI presents an opportunity for modernizing the social and economic policies that undergird creative industries and the livelihoods of artists.

