Perspective

$1.5 Billion Speed Bump: What the Anthropic Settlement Tells Us About AI Accountability

Pete Furlong / Oct 8, 2025

Pete Furlong is a senior policy analyst and the lead policy researcher for the Center for Humane Technology.

PARIS, FRANCE - MAY 22, 2024: Anthropic co-founder and CEO Dario Amodei attends the Viva Technology show at Parc des Expositions Porte de Versailles in Paris, France. (Photo by Chesnot/Getty Images)

At first glance, a $1.5 billion settlement in a book authors’ copyright lawsuit against Anthropic looks like a huge win for copyright holders and a blow to the AI company’s business. Under the settlement, Anthropic will be forced to delete millions of unlawfully captured books, and authors will receive compensation — seemingly resolving the issue in favor of the creators.

But the same week the settlement was first proposed, Anthropic raised $13 billion at a $183 billion valuation. In effect, Anthropic’s penalty for stealing the creative output and economic livelihood of thousands of authors amounted to less than 1 percent of the company’s total value.

From this perspective, the settlement raises as many questions as it resolves. What do genuine consequences look like in an industry where astronomical investment dollars continue to flow? At what point do civil penalties become just another cost of doing business?

Lawsuits are piling up against AI companies, with high-profile suits alleging copyright violations, data privacy issues, and even wrongful death. Some have been successfully litigated, but to date, companies have mostly responded with perfunctory product changes and the occasional large payout.

These settlement amounts can be attention-grabbing, no doubt, and create a sense that accountability has been achieved. But they pale in comparison to AI companies’ astonishing (and growing) scale and influence. Investors know this and see these companies following a familiar tech playbook.

Recall Facebook’s record-breaking $5 billion fine from the FTC in 2019. Facebook’s stock price actually increased after the fine’s announcement, as investors breathed a sigh of relief at what they saw as a manageable penalty, given the fine constituted just one-tenth of the company’s annual revenue. The fine dominated the news cycle for about two weeks, and then Facebook returned, more or less, to business as usual. Today’s AI companies likely view billion-dollar settlements or penalties the same way—as little more than a speed bump on the road to AGI.

In the initial hearing to approve the settlement in the Anthropic case, Judge William Alsup acknowledged this reality. He postponed approval, noting that when Anthropic pays that $1.5 billion settlement, “they’re going to get the relief in the form of a clean bill of health going forward,” and that the company would no longer be “at risk of being sued by somebody else on the very same thing.”

Even the courts recognize that financial fines and settlements are failing to change incentives in the AI industry. And when the leading AI companies aren’t defending themselves against legal claims, they’re going on the offensive, pouring money into political and lobbying campaigns. Despite framing themselves as research labs in pursuit of superhuman intelligence, leading AI firms have become preoccupied with fighting any form of meaningful accountability for their industry.

Andreessen Horowitz and OpenAI President Greg Brockman joined forces to found and fund the new “Leading the Future” political action committee, with support from Perplexity AI and Palantir’s Joe Lonsdale. Meanwhile, Meta launched its own California PAC, as well as a nationwide Super PAC, to fund “light touch” regulatory approaches and support industry-friendly candidates at the state level. All told, Silicon Valley leaders are putting their lobbying dollars to work against the growing consensus that we need common-sense accountability measures for AI harms.

OpenAI and others are also stockpiling lobbyists in California and across the country. The fundamental objective for most of this lobbying is the avoidance of accountability — and the freedom to develop AI products on the industry’s terms, not society’s.

AI companies want a regulatory system where “moving fast and breaking things” is not only accepted but encouraged, and where their relentless pursuit of intimate data and conversation-harvesting is neither questioned nor stopped. This was clear in the lobbying effort around the federal “moratorium” on state AI laws last summer, when AI companies advocated for a “temporary pause” on enforcement of state-level AI regulations for ten years, without a federal plan for regulation in its place. It was evident when Sam Altman sat in front of lawmakers, calling for AI regulations, while lobbying behind the scenes for their demise. And it remains clear in the ongoing efforts to lobby for reduced liability, even as AI companies face continued investigations and litigation for harms.

In this system, high-dollar settlement checks become a smokescreen — a cynical nod toward justice that provides cover for big tech to keep influencing policy behind closed doors and entrench their dominant position.

Without genuine accountability, successful lawsuits against AI companies become Pyrrhic victories. They amount to micro-successes and macro-failures that do nothing to compel AI companies to do better, design more safely, or prioritize the people using their products.

With the Anthropic settlement approved, checks will be cut. There may be one last round of headlines. But the company’s AI products will persist, its underlying business model will remain unchanged, and while Anthropic may pay out $1.5 billion, society will continue to bear the costs.
