August & September 2025 Tech Litigation Roundup

Madeline Batt, Melodi Dinçer / Oct 10, 2025

Madeline Batt is the Legal Fellow for the Tech Justice Law Project. Melodi Dinçer is Policy Counsel for the Tech Justice Law Project.

August & September landscape: Developments in lawsuits against big tech, new complaints against chatbots, and legal victories for California in federal and state courts.

The Tech Litigation Roundup gathers and briefly analyzes notable lawsuits and court decisions across a variety of tech-and-law issues. This month’s roundup covers updates in the following cases (use the links for each case to go to the relevant section): 

  • United States v. Google LLC – A district court ordered minimal remedies for Google’s violations of antitrust law related to its search engine and search advertising.
  • European Commission v. Google LLC – The European Commission ordered Google to pay €2.95 billion and end self-preferencing practices in an antitrust case focused on Google’s adtech.
  • United States v. Google LLC – A district court heard oral arguments on remedies for Google’s violations of antitrust law related to its adtech business.
  • Raine v. OpenAI, Inc. – Parents sued OpenAI, alleging that its ChatGPT product caused their 16-year-old son’s suicide.
  • Montoya v. Character Technologies, Inc. – Parents sued Character Technologies and Google, alleging that the Character.AI app caused their 13-year-old daughter’s suicide.
  • P.J. v. Character Technologies, Inc. – A mother sued Character Technologies and Google, alleging that the Character.AI app caused her then-14-year-old daughter’s suicide attempt.
  • E.S. v. Character Technologies, Inc. – Parents sued Character Technologies and Google, alleging that the Character.AI app sexually abused their 13-year-old daughter.
  • NetChoice, LLC v. Bonta – The Ninth Circuit affirmed the district court’s decision that California’s Protecting Our Kids from Social Media Addiction Act can go into effect while NetChoice’s constitutional challenge moves through the courts.
  • Social Media Cases – A California state court admitted expert testimony concerning social media addiction in a case alleging that several social media companies designed their platforms to drive engagement from kids.
  • C.H. v. Google, LLC – Google settled a class action lawsuit alleging that it collected children’s personal data via YouTube without parental consent and used it for targeted advertising.
  • Baig v. Meta Platforms, Inc. – The former head of security at WhatsApp filed a whistleblower action alleging that Meta retaliated against him for his efforts to expose widespread cybersecurity failures on the messaging app.
  • FTC v. Amazon, Inc. – Amazon paid $2.5 billion to settle FTC claims that it deceived consumers into joining its paid Prime membership program and then prevented them from cancelling.

Google avoids divestiture in US antitrust case, faces fine in EU

United States v. Google LLC (D.D.C. Case No. 20-cv-3010 (APM))
United States v. Google LLC (E.D. Va. Case No. 1:23-cv-00108-LMB-JFA)
European Commission v. Google LLC

After last year’s decision in United States v. Google that Google’s search engine and search advertising violated US antitrust law, the key question remained: how to address Google’s illegal monopoly. Although the government urged the court to require Google to divest parts of its business, the court adopted more modest remedies. These included ending exclusive distribution agreements for certain Google products and requiring Google to share some of its search index and user-interaction data with competitors.

Neither measure is expected to significantly change Google’s market dominance. The court declined to order any structural remedies that could have better “broken up” the monopolist’s hold on search, including the divestiture of Google’s browser Chrome, as the US government and nearly every state Attorney General had urged. It even permitted Google to continue paying to be the default search engine across devices and browsers, despite the fact that Google used this tactic to build its illegal monopolies in the first place.

Judge Mehta cited the generative AI revolution to explain his cautious approach, suggesting that Google is facing a newly competitive landscape due to products like ChatGPT. But Kate Brennan of the AI Now Institute was critical of this explanation, noting that Google is actually uniquely positioned to profit from generative AI. Unlike the as-yet-unprofitable start-ups that Judge Mehta suggests are its new competitors, Google owns a browser, an operating system, and devices where it can deploy and monetize AI unilaterally. Judge Mehta’s decision did not touch this legacy infrastructure through which Google can consolidate even more power over genAI.

In addition to the search case, Google was found liable for anticompetitive conduct involving its adtech business back in April. In September, a federal judge in Virginia heard arguments to determine appropriate remedies. The case provides US enforcers another chance to convince the courts to adopt stronger measures to address Google’s monopoly power in the adtech market.

Meanwhile, across the Atlantic, the European Commission imposed a fine of almost 3 billion euros on Google in a separate antitrust case, launched in 2021. This case examined whether Google violated European competition law by favoring its own open display advertising technology over competitors’. In a June 2023 statement of objections, the Commission found that Google was unfairly favoring its ad exchange AdX and indicated that “mandatory divestment by Google of part of its services” would likely be necessary.

For now, though, Google has avoided that outcome. In addition to the fine, the Commission ordered Google to stop self-preferencing its ad exchange and issued the company a 60-day deadline to come up with a solution to its conflict of interest. The measures were announced in the context of increasing foreign policy tensions around enforcement, as the Trump Administration has threatened tariffs against countries that seek to enforce rules or impose fines on US tech companies. Reports suggested that the announcement of the fine was delayed due to concerns about possible US retaliation, reflecting the extent to which companies like Google now factor into geopolitical decisions.

New lawsuits target ChatGPT and other chatbot products

Raine v. OpenAI, Inc. (California Superior Court, San Francisco County, Case No. CGC-25-628528)
Montoya v. Character Technologies, Inc. (D. Colo. Case No. 1:25-cv-02907)
P.J. v. Character Technologies, Inc. (N.D.N.Y. Case No. 1:25-cv-01295)
E.S. v. Character Technologies, Inc. (D. Colo. Case No. 1:25-cv-02906-NRN)

In August, parents filed suit against OpenAI in California state court, alleging that ChatGPT caused their son’s suicide. According to the complaint, 16-year-old Adam Raine had begun using ChatGPT for homework help, but its anthropomorphic and engagement-maximizing features led him to spend hours per day sending personal messages to the AI product. The filing states that after his messages to the chatbot broached mental health topics, it encouraged Adam to view suicide as a legitimate and even brave option, ultimately coaching him through multiple suicide attempts. The complaint further alleges that the product instructed Adam to keep his struggles secret when he was considering asking family for help and even provided the detailed instructions that Adam followed on the night he died.

The complaint was filed by Tech Justice Law Project (TJLP) and Edelson PC on behalf of Adam’s parents. It argues that ChatGPT is a defective product, alleging that its addictive and deceptively human-like design, coupled with insufficient safety features, poses an unacceptably high risk to consumers like Adam. It also argues that OpenAI was negligent and violated California’s Unfair Competition Law by putting a product that poses such a high risk of harm on the market. (Note to readers: TJLP is counsel of record in this matter.)

Past chatbot cases (such as Garcia v. Character Technologies and A.F. v. Character Technologies, both brought by TJLP and the Social Media Victims Law Center) have focused on technologies marketed as AI “companions.” The Raine case is the first to highlight that the dangers associated with these role-playing products are equally present in general-purpose products like ChatGPT.

Additional cases on behalf of children harmed by chatbots were also filed recently. In September, the Social Media Victims Law Center filed three new lawsuits in federal district court against the makers of the companion chatbot app Character.AI. Montoya v. Character Technologies concerns a 13-year-old girl who died by suicide after allegedly becoming addicted to Character.AI. According to the complaint, she repeatedly wrote the same haunting mantra in a journal, “I will shift,” a reference to the idea that humans can “shift” consciousness to join chatbot personas in a virtual reality.

The other two lawsuits were filed on behalf of survivors: one involving a then-14-year-old who attempted suicide after her parents blocked her access to Character.AI (P.J. v. Character Technologies) and another involving a 13-year-old who allegedly received sexually explicit messages from the app (E.S. v. Character Technologies). All three cases assert claims for strict product liability, negligence, and other torts, including intentional infliction of emotional distress. They also bring claims under New York and Colorado statutes governing deceptive business practices.

California secures legal victories in federal and state courts

NetChoice, LLC v. Bonta (9th Cir. No. 25-146)
Social Media Cases (California Superior Court, Los Angeles County, Case No. JCCP 5255)

California has been among the most active US states in regulating tech to protect minors, including passing the Protecting Our Kids from Social Media Addiction Act in 2024 to address the negative health impacts of social media. In NetChoice, LLC v. Bonta, tech industry association NetChoice sought a preliminary injunction barring the legislation from going into effect on First Amendment grounds.

In September, the Ninth Circuit largely affirmed the district court’s denial of NetChoice’s request for a preliminary injunction. (TJLP filed an amicus curiae brief supporting the legislation.) The panel held that NetChoice failed to show that California’s ban on personalized feed algorithms for minors was facially unconstitutional. NetChoice’s as-applied challenge also failed: while the panel acknowledged that some personalized feed algorithms may be expressive speech, it reasoned that the district court did not abuse its discretion by concluding that NetChoice lacked standing to bring such fact-specific challenges without the participation of its member companies and specific information about their algorithms. The Ninth Circuit also agreed that requiring social media companies to default minors into private accounts was not content-based speech regulation and survived intermediate scrutiny, that NetChoice’s challenges to age-verification measures scheduled to begin in 2027 were unripe, and that the phrase “addictive feed” is neither unconstitutionally vague nor “pejorative,” as NetChoice tried to claim.

The Ninth Circuit did diverge from the district court decision on the regulation of like counts, however. The Act requires social media companies to ensure that minors’ accounts, by default, do not show them the number of likes, shares, or other feedback that a post has received. While the district court declined to enjoin this provision, the Ninth Circuit found that the provision was content-based and therefore likely unconstitutional. Accordingly, it directed the district court to enjoin only that provision, leaving the rest of the Act intact.

The ruling is a significant win for design-based approaches to regulation. Such approaches seek to address tech-related harms by restricting certain product features (like addictive feeds) at the design stage, rather than relying on reactive measures like content moderation. The Ninth Circuit’s decision upholding California’s law suggests that similar approaches may be viable in other jurisdictions.

Over in California state court, a trial judge allowed expert testimony in coordinated personal injury litigation concerning youth social media addiction. The plaintiffs are minor users of social media platforms (or parents of those users) who allege they suffered various types of harm as a result of using Facebook, Instagram, Snapchat, TikTok, and YouTube. The social media defendants tried to block the plaintiffs from having experts in psychiatry, neuroscience, pediatrics, and media psychology testify about the potential links between social media use by minors and adverse health outcomes. The judge rejected the defendants’ attempt, however, finding in part that the experts’ conclusions were backed by peer-reviewed studies and even the companies’ own internal documents. Trial is set for November 2025, and it will be the first time a jury hears evidence about how these social media companies built their platforms to drive up user engagement, and how that choice may have harmed young people in particular.

Other updates

C.H. v. Google, LLC (N.D. Cal. Case No. 19-07016)
Baig v. Meta Platforms, Inc. (N.D. Cal. Case No. 3:25-cv-7604)
FTC v. Amazon, Inc. (W.D. Wash. Case No. 2:23-cv-00932)

Google settles case on kids’ data privacy

In August, Google agreed to pay $30 million to settle a federal class action lawsuit, C.H. v. Google, alleging violations of children’s data privacy. The plaintiffs, including parents and minors, accused Google of violating dozens of state laws by using cartoons and other kid-friendly content on YouTube to lure children in and collect their personal information without parental consent. Google then allegedly used kids’ data to serve targeted ads. The agreement follows a 2019 settlement of similar charges brought by the US Federal Trade Commission (FTC) and New York Attorney General Letitia James, which required Google to pay $170 million in fines and change some of its data practices.

Whistleblower sues Meta over WhatsApp security issues

As one data privacy-related suit concluded for Google, another was filed against Meta. In September, former WhatsApp head of security Attaullah Baig sued Meta (which owns WhatsApp) and several executives, alleging violations of the Sarbanes-Oxley Act, which includes protection for whistleblowers. Baig’s complaint details major data privacy failures at WhatsApp that ran afoul of a 2020 FTC privacy order and, in one case, California and EU data privacy law. According to the complaint, when Baig raised these and other data security and regulatory concerns, he faced escalating retaliation and was eventually terminated.

Amazon settles FTC case over Prime

Later in September, Amazon reached a $2.5 billion settlement with the FTC over allegations that it had misled consumers into joining its Prime membership program and then prevented them from cancelling their enrollment. The FTC had alleged that Amazon tricked consumers into joining Prime by creating confusing and deceptive interfaces that led them to sign up for memberships without their knowledge, then created a complex cancellation process that was purposefully designed to prevent people from ending their Prime memberships. According to the terms of the settlement, Amazon paid a $1 billion civil penalty and $1.5 billion in restitution to consumers. It also agreed to changes to its Prime enrollment and cancellation process, including clearly disclosing all terms of Prime membership and making it possible to cancel Prime by the same method consumers use to sign up.

