What the Copyright Case Against Ed Sheeran Can Teach Us About AI

Derek Slater / May 2, 2024

Musician Ed Sheeran performs at Queen Elizabeth II's Platinum Jubilee in June 2022. Picture by Andrew Parsons / No 10 Downing Street, CC BY 2.0.

On April 17, a New York court heard arguments in a case about use of existing music without consent. This could have been a headline about generative AI, but this case was about Ed Sheeran.

The famous musician returned to court to defend against claims that his song, “Thinking Out Loud,” infringes the copyright in Marvin Gaye’s “Let’s Get It On.” After seven years of litigation and Sheeran’s victories before a jury as well as a district court judge, the latter ruling is now on appeal.

The case centers on a tension at the heart of copyright and creativity. The firm Structured Asset Sales (an investment bank that owns partial rights to “Let's Get It On”) claims that Sheeran infringed copyright by unfairly exploiting the song without consent. But what Sheeran stands accused of is merely using a chord progression and harmonic rhythm shared with that song – “basic musical building block[s],” as Judge Stanton called them in his district court ruling. Finding in Sheeran’s favor, he added that “all songs, after all, are made up of the ‘limited number of notes and chords available to composers’ …. to protect their combination would give ‘Let's Get It On’ an impermissible monopoly.”

This isn’t just an issue for music, but all forms of expression. Imagine if everyone needed to get Stephen King’s permission to write a horror novel, or George Lucas’ permission to create a hero’s journey amidst a war set in space. That may be good for King and Lucas, but it’d be a disaster for artists as a whole.

Requiring consent for use of existing material can be harmful for artists and culture, not to mention fans and the public at large. That’s why copyright has always allowed certain uses of existing material, including by drawing lines between protectable expression and unprotectable ideas, facts, and other elements. Rightsholders can demand consent for some uses, but they are not allowed to enclose and cut off the building blocks of culture and knowledge.

That basic principle is too often absent from debates about generative AI, which brings up the same sorts of tensions at play in the Sheeran case. Large language models like OpenAI’s GPT-4 and Anthropic’s Claude are built by analyzing vast amounts of text in order to derive the “basic building blocks” of language and facts about the world. Similarly, text-to-music generative AI tools like Suno and Udio are trained by analyzing vast amounts of audio in order to identify basic ingredients of music – or perhaps “the letters of the alphabet of music” (to appropriate a term used by Sheeran’s lawyer). And while using AI to create artistic works has drawn incredible attention and controversy, there are myriad other uses of generative AI, including in areas like scientific research, education, and healthcare. Many rightsholders, artists, and others see consent as a trump card in evaluating these tools and their use of existing materials in training: “Nobody should be allowed to use your data for free, without your consent,” full stop.

As in Sheeran's case, merely invoking consent in this way does not yield a clear answer – it just shifts the questions. What constitutes “your data” as opposed to the “building blocks” of human creativity and knowledge that anyone can use freely? What parts of a work are protected by copyright, and which are uncopyrightable elements? At what point does requiring consent just devolve into enclosure – taking information that belongs to everyone, and claiming it as your own property?

Consent is an important ethical and legal consideration in the context of generative AI, but so too is enclosure. Defining the rules of the road around generative AI requires an approach that wrestles with those tensions.

Some suggest generative AI trained on existing works without consent is categorically different – it is a new and unexpected use beyond the “social contract,” and it allows new works to be created “at scale” that compete with existing works. These make for good bumper stickers, but not for useful guides.

After all, particularly when it comes to art and creativity, the unexpected and novel are often good things. Foreclosing competition from anything unexpected may help some existing artists, but it does not serve the public’s interest in a diverse, thriving artistic culture. The scale argument cuts both ways as well: creators using new tools will give fans options that they like, at scale – good for those creators and those fans. Allowing these uses can thus improve social welfare.

Consider here a different US music copyright case: Acuff-Rose, the company that owned the rights to Roy Orbison’s famous “Oh, Pretty Woman,” sued members of the rap group 2 Live Crew over their parody “Pretty Woman.” The notion of parody itself was certainly known to Orbison, but this genre of music wasn’t; in fact, at the time of the case, the genre was often derided both for its form and content as a violation of society’s norms. Acuff-Rose argued “Pretty Woman” competed with its works, and it did so “at scale” in the sense that 2 Live Crew sold a quarter million copies in its first year. Yet a unanimous Supreme Court found in 2 Live Crew’s favor, establishing that parodies like theirs could qualify as “fair use” under US law.

More generally, controversies over unanticipated uses of existing works, enabled by new technology at great scale, are not new. Consider photocopiers, home video and audio recording, iPods and personal media storage devices, and search engines – all were broadly permitted, despite opposition from many rightsholders and claims that they would impede existing artists’ ability to make money.

Some people might have drawn different lines in these cases. And by the same token, generative AI will entail some hard line drawing. But rather than simply invoking consent, this requires a deeper examination of values, benefits, and harms, grounded in copyright and other relevant frameworks.

Just as much as people consider the legality and ethics of consent in the context of using existing works, we should also consider the legality and ethics of enclosure. This is true whether we’re talking about uses like Ed Sheeran's music, generative AI, or the myriad ways people rely every day on using other people’s material to create anew.


Derek Slater
Derek Slater is a tech policy strategist focused on media, communications, and information policy, and is the co-founder of consulting firm Proteus Strategies. Previously, he helped build Google’s public policy team from 2007-2022, serving as the Global Director of Information Policy during the last...