November 2025 Tech Litigation Roundup
Madeline Batt, Melodi Dinçer / Dec 10, 2025
Madeline Batt is the Legal Fellow for the Tech Justice Law Project. Melodi Dinçer is Policy Counsel for the Tech Justice Law Project.
A Bad Month for Antitrust, Chatbot JCCP Looms, and More in Tech Litigation
The Tech Litigation Roundup gathers and briefly analyzes notable lawsuits and court decisions across a variety of tech-and-law issues. This month’s roundup covers the following cases:
- F.T.C. v. Meta (D.D.C. Case No. 20-3590 (JEB)) - Meta won a major victory over FTC antitrust enforcers, successfully arguing that it does not hold a social networking monopoly.
- OpenAI Cases - Victims and their families filed seven new product liability actions against OpenAI, alleging psychological harm caused by ChatGPT and raising the possibility of a chatbot mass tort action.
- TikTok v. Bonta (N.D. Cal. Case No. 3:25-cv-09789), Meta v. Bonta (N.D. Cal. Case No. 3:25-cv-09792), and Google v. Bonta (N.D. Cal. Case No. 5:25-cv-09795) - Major tech companies revived the challenge to California’s Protecting Our Kids from Social Media Addiction Act in three separate lawsuits.
- Worker Info Exchange letter - The non-profit is gearing up to bring a collective action against Uber for GDPR violations associated with using AI to set wages.
- Amazon.com Services v. Perplexity AI (N.D. Cal. Case No. 3:25-cv-09514-MMC) - Amazon sued Perplexity for using AI agents disguised as human users to access its web store.
- Thele v. Google (N.D. Cal. Case No. 5:25-cv-09704-NC) - A putative class action alleges that Google secretly changed users’ settings to allow Gemini AI to read private communications without notice.
Related litigation is linked throughout the Roundup.
TJLP would love to hear from you on how this roundup could be most helpful in your work – please contact us with your thoughts.
Meta’s big win marks an uncertain month for tech antitrust enforcement
Meta won a significant victory over the FTC in a years-long antitrust case, leaving some commentators uncertain about the future of antitrust law as a tool for tech accountability. The FTC sued Meta in 2020, arguing that Meta had monopolized the personal social networking industry. The agency alleged that Meta had engaged in a "buy or bury" scheme by acquiring start-ups like WhatsApp and Instagram that might have otherwise competed with Facebook, resulting in a dominant social networking platform that generated substantial profits even as user experience declined.
Judge Boasberg rejected the FTC's claims. Citing the evolving functions of Meta's apps over time, he concluded that Meta today competes with both YouTube and TikTok in a social media market shaped by short-form videos. His emphasis on changes in the social media landscape since the lawsuit was filed in 2020 echoed many of the reasons behind August's corporate-friendly remedies decision in US v. Google (discussed in a prior roundup).
For proponents of antitrust enforcement against big tech, Meta's win was another disappointment in which a court suggested that rapid technological developments render antitrust claims obsolete by the time they are fully litigated. Reactions to the decision raised broader concerns about the future of antitrust enforcement against big tech, with many concluding that there is a fundamental mismatch between the pace of antitrust litigation and the speed of tech product development. The pessimism only worsened with comments this month from the judge overseeing the remedies trial in an antitrust case targeting Google’s adtech business, who signaled skepticism about whether forcing Google to sell its ad exchange is an appropriate remedy.
Meanwhile, as conversations in the wake of the Meta decision questioned whether antitrust can restrain the world’s largest tech companies, rental-pricing software company RealPage also appeared to avoid significant consequences in its own antitrust case, then went on the offensive. RealPage develops AI software that critics allege facilitates anti-competitive price-fixing by landlords. This month, the company agreed to settle an antitrust lawsuit filed by the Department of Justice in a deal that allows it to continue generating pricing recommendations for landlords, so long as it does not use nonpublic, “competitively sensitive” data to recommend rent prices. This and the other behavioral changes included in the settlement have not reassured RealPage’s detractors.
Shortly after announcing the seemingly favorable settlement, RealPage filed a First Amendment lawsuit challenging a New York law that would prohibit the use of AI to coordinate home rental prices. The company cited the settlement in its complaint to argue that its algorithmically generated pricing recommendations are constitutionally protected speech, not antitrust violations.
Prominent US antitrust cases against big tech are still ongoing, including cases against Google (now in its remedies phase), Amazon, and Apple. Whether the outcomes of those cases will affirm the chorus of doubts about the viability of antitrust enforcement against tech companies remains to be seen.
ChatGPT victims and families file seven new cases against OpenAI, move for JCCP
Tech Justice Law Project and Social Media Victims Law Center filed seven new complaints against OpenAI in California state court, alleging severe psychological harm caused by ChatGPT 4o. The plaintiffs have since moved to litigate the cases as a Judicial Council Coordination Proceeding (JCCP), a mechanism for mass torts available in California courts. If successful, the motion would allow these and future ChatGPT cases to proceed in a coordinated manner, similar to the existing JCCP for social media addiction litigation.
The victims in the lawsuits range in age from 17 to 48, and all either died by suicide or suffered delusional disorders (commonly referred to as "AI psychosis") after extensive use of ChatGPT 4o. As in the case of Adam Raine, the lawsuits allege that the chatbot product acted as a "suicide coach" for several victims, providing encouragement and then detailed instructions up until the moment of their deaths. Survivors represented in the lawsuits also suffered severe harms, including involuntary psychiatric commitment and deep financial debt.
Notably, several of the complaints allege that the victims had been using ChatGPT safely prior to the rollout of 4o, which OpenAI has admitted exhibits higher levels of sycophancy than prior models. The complaints link the plaintiffs' alleged psychological injuries to a lack of meaningful safety testing prior to 4o's reportedly rushed launch, as well as to OpenAI’s choices to reduce protections against suicide-related content. OpenAI initially made changes to limit 4o's sycophancy after observing the results post-launch, but reversed course after a customer outcry.
The lawsuits assert a range of state law claims. Zane Shamblin's parents are the first to sue OpenAI under a manslaughter theory, arguing that Sam Altman and the company caused their son’s death through criminal negligence, rendering the defendants negligent per se. Other causes of action pursued in the lawsuits include aiding and encouraging suicide, strict products liability, negligence, wrongful death, and violation of California's prohibition on practicing psychotherapy without a license.
This month, OpenAI also filed an answer in an earlier lawsuit by the parents of 16-year-old Adam Raine, who died by suicide after extensive use of ChatGPT. The filing offers a glimpse into how the company will likely respond to these new, similar claims. (TJLP co-filed the complaint in Adam’s case but has no further involvement in the action.) OpenAI argued that "to the extent that any 'cause' can be attributed" to Adam Raine's death by suicide, it was his "misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT." The company accused the 16-year-old of violating its terms and conditions by speaking to the chatbot product about his suicidal ideation, suggesting that this absolves it of liability for his death.
Social media litigation moves forward
November saw significant developments in litigation related to social media. In California, tech companies YouTube (through its parent company, Google), Meta, and TikTok responded to industry association NetChoice's failed First Amendment challenge to the Protecting Our Kids from Social Media Addiction Act by filing their own lawsuits directly. The Act requires social media platforms accessible to California children to comply with certain anti-addiction measures, including prohibiting access to “addictive feeds” (e.g., feeds using personalized algorithms and infinite scroll). NetChoice previously sought to strike down the law, arguing that its provisions violate the First Amendment to the US Constitution (TJLP filed an amicus brief opposing NetChoice's position). The Ninth Circuit largely rejected NetChoice's challenge, holding that NetChoice lacked standing to bring those kinds of claims, but left open the possibility that certain fact-specific applications of the Act could violate the First Amendment.
These new lawsuits challenging the same law appear to respond directly to the court’s finding. The companies are more likely than NetChoice to reach the merits of their as-applied challenges to California’s law. Because of their reach, even a narrower ruling that the law is unconstitutional only as applied to their activities would still stymie California regulators' ability to limit companies’ addictive design choices, like personalized feeds and infinite scroll. To prevail, though, the companies will need to show that their specific targeting algorithms are a form of constitutionally protected expression. Given the Ninth Circuit’s statements in NetChoice v. Bonta that algorithms are unlikely to be expressive if they merely display content based on a user’s online activity, the companies face an uphill battle.
As the fight over California’s law enters a new phase, Virginia faces a fresh legal challenge to its recently enacted social media regulations. The Virginia law limits children under 16 years old to one hour of social media access per day, with parental consent required to increase the daily limit. NetChoice has sued to enjoin the restriction, arguing that the law violates the First Amendment.
Litigation targeting the design of social media platforms also advanced this month. A court filing in the multidistrict litigation on adolescent social media addiction revealed previously undisclosed information about Meta’s internal knowledge of its products’ harms. The filing further alleged that the company made repeated decisions to terminate safety research and drop protective features to maximize engagement.
In a separate platform design case in New Mexico, a judge ordered Meta to produce records related to AI chatbots. Against the backdrop of ongoing discovery disputes over chatbot records (discussed in last month's newsletter and making headlines again in November), the order indicates that companies may be required to produce chatbot-related materials in the course of social media litigation, not only in disputes specifically involving AI.
Finally, various parties submitted briefs in the Massachusetts Attorney General's lawsuit against Meta for purposefully designing its platform to addict children. The Electronic Privacy Information Center (EPIC) submitted an amicus brief on behalf of itself, Common Sense Media, Cybersecurity for Democracy, Tech Justice Law Project, and thirteen legal scholars with expertise in digital rights. The brief supported the State’s case in response to Meta’s appeal from a lower court decision, arguing that Section 230 and the First Amendment do not prevent the lawsuit from going forward.
Early legal tests in the EU and the US for emerging applications of AI
The AI boom has featured far-reaching industry claims that AI will soon perform tasks that were previously handled only by humans. This month, those more novel applications of AI began facing early legal tests.
In the EU, the non-profit Worker Info Exchange sent a letter before action to Uber, the first step toward filing a collective legal action against the company. The letter argues that Uber violated the GDPR by using AI to “dynamically” set driver wages (what Zephyr Teachout describes as algorithmic personalized wages) and the "commission" drivers must pay Uber on each ride.
Before implementing AI-determined pay rates, the letter says, drivers' wages were determined by predictable factors, such as time and distance. Since the change, 82% of drivers earn less per hour and the commission paid to Uber has soared from a flat 25% of the fare to variable rates often exceeding 50%. Worker Info Exchange argues that Uber's system violates the GDPR by engaging in unlawful automated decision-making and by using and transferring drivers' data without their consent. With workers paying close attention to how widespread AI adoption may affect their rights, the legality of AI-determined wages under the GDPR will have significant implications in the EU.
Meanwhile, in the US, a lawsuit between Amazon and Perplexity brings the issue of agentic AI into court. Unlike chatbots, AI agents are designed to act autonomously on a user's behalf, in ways that affect the real world. One commercial application is AI web browsers, which allow users to navigate the internet by communicating with an AI agent rather than browsing themselves. For example, rather than navigating to the Amazon web store, scrolling through product options, and making a purchase, an AI web browser user could tell their AI agent, “Please buy me the cheapest well-reviewed blender on Amazon,” and then confirm the agent’s selection, without ever viewing Amazon’s webpage.
AI startup Perplexity’s agentic AI browser Comet is now the subject of a lawsuit by Amazon for enabling users to do exactly that. Amazon argues that Perplexity’s AI agent is unlawfully accessing and making purchases on Amazon’s web store while disguised as a human user, despite Amazon’s efforts to limit AI agents’ access to the site. Amazon claims that this covert access violates the Computer Fraud and Abuse Act and the related California Comprehensive Computer Data Access and Fraud Act.
Perplexity publicly called Amazon’s lawsuit “a threat to all Internet users,” comparing use of Comet to a “right to hire labor.” Its emphatic response reflects the reality that, if Amazon succeeds in establishing that websites may prohibit access to AI agents, it would deal an existential blow to the growth of agentic AI. Using an “AI assistant” is much less desirable if the agent cannot access portions of the internet it needs to perform tasks. In support of its position, Amazon cites the cybersecurity risks of AI agents, emphasizing that allowing its customers to use Comet for purchases jeopardizes their financial information. Amazon also loses ad revenue if human customers are not accessing the site themselves, a dynamic that may motivate opposition to AI agents from other tech companies whose business models rely on ads to generate revenue.
Lawsuit alleges Google let Gemini access and use people’s communications without consent
A putative class action alleges that Google secretly changed users' privacy settings to allow Gemini AI to read all their emails and messages, without users' knowledge or consent. The complaint asserts that Google describes its "smart features" setting as an opt-in feature, through which users "agree" to let Gemini review their private communications. Yet, according to the filing, Google turned on smart features by default without any notice to consumers, burying the choice to opt out in convoluted privacy settings menus. As the complaint points out, giving Gemini access to users' emails and messages would allow Google to analyze highly personal information, from interpersonal relationships to political affiliation to financial and health data. This would not be the first time Google faced legal scrutiny for secretly undermining users’ explicit, privacy-protective choices.
The putative class brings claims under several California data privacy laws (the California Invasion of Privacy Act (CIPA) and the California Comprehensive Computer Data Access and Fraud Act (CDAFA)), the California constitutional right to privacy, common law intrusion upon seclusion, and the federal Stored Communications Act. The plaintiffs seek damages, including punitive damages, as well as declaratory and injunctive relief. The requested injunction encompasses an order for Google to stop its alleged unlawful tracking as well as the destruction of all data allegedly collected via Gemini’s review of users’ private communications.
The outcome of the lawsuit has implications for the ongoing efforts to monetize AI. While AI companies are currently burning cash, expanded access to more intimate data by AI models would enable even more highly personalized, potentially more profitable targeted advertising. AI models’ access to personal communications is therefore an issue that directly pits user privacy against the financial interests of AI companies.