March 2025 Tech Litigation Roundup
Melodi Dinçer / Apr 8, 2025
Melodi Dinçer is Policy Counsel for the Tech Justice Law Project.
March’s Legal Landscape: AdTech Cases, International Investigations, and Consequential Privacy Rulings
This roundup gathers and briefly analyzes cases across a variety of legal issues and jurisdictions, including tech industry greenwashing, product recalls on Amazon, X suing nonprofits over ad tech research, X suing Indian officials over content regulation, new lawsuits against generative AI companies, rideshare app products liability theories, and more. The Tech Justice Law Project (TJLP) tracks these and other tech-related cases in US federal, state, and international courts in this regularly updated litigation tracker.
If you would like to learn more about new cases and developments directly from the litigators and advocates involved, join TJLP for our new tech litigation webinar series! In April, we will explore the NetChoice v. Bonta cases challenging two California laws that attempt to make social media platforms safer for minors – the Age-Appropriate Design Code and the Protecting Our Kids from Social Media Addiction Act. Please RSVP here to receive further information about the event.
Read on to learn more about March developments in tech litigation.
US developments
A trio of new Amazon cases
Amazon is no stranger to the courthouse, regularly facing lawsuits challenging the company’s monopoly power, environmental impacts, and well-documented labor abuses. That trend continued this month, with the FTC targeting a scheme built around Amazon’s marketplace, a new class action against the company, and Amazon bringing its own lawsuit against a federal consumer protection agency.
On March 3, the FTC filed a complaint in federal court alleging that the operators of “Click Profit,” an AI-powered offering marketed to consumers with promises of automatically building profitable e-commerce stores on platforms including Amazon, Walmart, and TikTok, violated several parts of the FTC Act. Consumers paid a minimum of $45,000 for Click Profit to set up e-commerce stores that would supposedly send them massive profits, but after an investigation, the FTC determined that the scheme was a sham. In reality, few Click Profit customers saw any return on their investments; most lost their entire payments and were left saddled with credit card debt and unsold products. On March 18, a federal court temporarily ordered Click Profit to stop operating and freeze its assets as the case proceeds.
Then, on March 14, a group of Amazon customers filed a class action lawsuit in federal court claiming Amazon misled customers about the damaging environmental impact of its Amazon Basics toilet paper and paper towel products. Despite spending hundreds of millions of dollars marketing its environmentally friendly business practices, Amazon allegedly failed to disclose to customers that its paper products contribute to the continued deforestation of Canada’s boreal forest. The lawsuit frames these practices as fraudulent, unfair, and deceptive, arguing that Amazon preys on the goodwill of eco-conscious customers, unlawfully deceiving them while continuing to profit from environmental destruction.
Also on March 14, Amazon filed its own lawsuit in federal court against the US Consumer Product Safety Commission (CPSC), an independent agency that seeks to protect the public from unreasonably risky consumer products by issuing recalls, evaluating products, and developing safety standards. Four years ago, the CPSC filed a complaint against Amazon alleging it distributed products that could pose a danger to consumers. In July 2024, the CPSC issued a decision and order finding that Amazon had distributed defective, unsafe products and was legally responsible for recalling more than 400,000 items, including products that violated federal flammability standards.
Now, Amazon is pushing back against the CPSC, arguing that it is not a distributor of these products and is not responsible for protecting the public from products sold by third parties on its platform. Instead, Amazon brands itself as a mere logistics provider that helps coordinate the promotion, sale, and delivery of products, and nothing more. A logistics provider is legally distinct from a distributor, which the Commission’s authority does reach, so Amazon argues the agency’s rules do not apply to the company. Amazon also claims that the CPSC’s structure is unconstitutional, attacking the legal foundation of the consumer safety agency, an increasingly common tactic for industries that would prefer to operate without the burdens of safety regulation.
Ad-tech: Two new cases show online advertisers’ power over social media platforms
Over 80% of US consumers know the trade-off at the heart of many digital services today: sites and platforms are free because user data is used for targeted ads that make money for companies like Google and Meta. Digital advertising is the lifeblood of the internet and internet-connected apps, and this “ad tech” industry can directly influence the decisions of tech companies, which depend on advertiser money. Developments this month in two lawsuits strike at the heart of this industry: a suit against Meta for misleading advertisers and another against X for attempting to silence ad-based research and advocacy.
Advertisers’ class action against Meta
First, on March 13, a federal judge approved a notice plan for a large, years-long class action lawsuit against Meta brought by advertisers on Meta platforms. In DZ Reserve v. Meta, advertisers claim the company used misleading user data to inflate its ad reach metrics, causing advertisers to pay higher prices to place ads than they otherwise would have. Meta allegedly inflated the reach of ads on Facebook and Instagram by 200 to 400%, and internal documents produced by Meta allegedly show that senior executives knew about the false numbers for years but failed to act, eventually covering up the issue.
After a federal trial court certified the class of advertisers affected by Meta’s actions, Meta appealed to the Ninth Circuit. In March 2024, a Ninth Circuit panel upheld the certification, allowing the case to continue. Meta tried to appeal that decision to the Supreme Court, but the Court rejected the attempt this January. Now, with the case back before the trial court, the judge has ordered the parties to notify potential class members. The class covers potentially millions of individuals and businesses that have paid for ads on Facebook and Instagram since August 15, 2014. Meta itself describes this ground-breaking case as “one of the largest fraud classes in the Ninth Circuit’s history, encompassing millions of diverse advertisers.”
Media Matters sues X
Next, on March 10, nonprofit research center Media Matters for America (MMFA) filed a lawsuit against X Corp. (formerly Twitter). The filing follows multiple lawsuits previously brought by X against MMFA in courts across the globe, in response to the nonprofit’s research report showing that ads for major brands appeared next to neo-Nazi and white nationalist content on the platform. MMFA’s research inspired an advertiser boycott that cost X up to $75 million in ad revenue by the end of 2023 as numerous companies pulled their ads from the site. In its lawsuits, X claimed that MMFA manipulated algorithms on its platform to create misleading images of ads next to racist content, but MMFA argues X’s legal attack is merely “meant to bully X’s critics into silence” and stifle legitimate tech accountability research.
Musk first sued MMFA in Texas federal court, where he has recently relocated his companies, hoping that a friendly judge in the jurisdiction would lead to a favorable result, a practice known as “forum shopping.” After coming under public scrutiny for forum shopping, he changed X’s terms of use so that all legal disputes must be filed in a Texas federal court where a judge who has been favorable to X sits (though not where X is headquartered). Musk also filed two more lawsuits, in Ireland in December 2023 and Singapore in July 2024, each making the same argument that MMFA manipulated the algorithm in its report. Meanwhile, X’s UK subsidiary sent MMFA demand letters threatening further litigation. All of these actions are ongoing.
MMFA’s new lawsuit alleges that X breached its contract with users by changing the terms to require lawsuits to be filed in Texas. The nonprofit argues that, at the time X filed its lawsuits against MMFA, the terms of use required complaints to be filed in San Francisco (where MMFA filed this complaint). MMFA seeks money damages for the breach and an order forcing X to halt the lawsuits in Ireland and Singapore, which it describes as the company’s “vendetta-driven campaign of libel tourism.”
International developments
As global regulatory scrutiny of tech giants grows, March saw a series of significant legal developments targeting these platforms. From AI training disputes in France to government censorship in India and child data protections in the UK, here are several key cases to watch.
French publishers sue Meta over AI training
On March 12, a coalition of French publishers and authors filed a lawsuit against Meta in a Paris court, alleging that the company used their copyrighted content without authorization to train its AI models. The complaint accuses Meta of infringing intellectual property rights under French law. The case was brought by the National Publishing Union (SNE), the National Union of Authors and Composers (SNAC), and the Society of People of Letters (SGDL). According to SNE president Vincent Montagne, the coalition gathered evidence of “massive” copyright breaches and attempted to engage with Meta prior to filing, but received no response. The group has also notified the European Commission, arguing that Meta’s practices violate the EU AI Act and copyright rules.
Under the EU’s AI Act, generative AI systems must comply with the bloc’s copyright laws and disclose the materials used to train their models. The law imposes specific transparency obligations on developers of general-purpose AI, including a requirement to publish detailed summaries of the content used in training datasets — a provision designed to protect creators and uphold copyright protections across the EU.
This lawsuit adds to a growing wave of litigation targeting tech companies for training AI on copyrighted content. In the US, Meta is facing a similar suit in Kadrey v. Meta, brought by a group of authors. As noted in last month’s roundup, related copyright cases involving OpenAI, The New York Times, and visual artists are also working their way through US courts.
X challenges the Indian government over content takedown orders
On March 20, X filed a lawsuit against India’s government over content regulations and censorship of its platform. The case challenges a series of directives issued by the Ministry of Electronics and Information Technology that compel X to remove user posts, many of which reportedly involve criticism of Prime Minister Narendra Modi and his administration. X argues that these orders represent an unlawful expansion of the government’s censorship powers, lacking transparency, judicial oversight, and due process. Enforcing them, X contends, would violate users’ fundamental rights under India’s constitution. This legal challenge marks the latest flashpoint in the tense relationship between X and the Indian government, following previous standoffs over similar issues, including during the 2021 farmers’ protests.
In recent years, Indian authorities have imposed increasingly stringent regulations on social media companies—from Meta to Google—including rules that carry the threat of criminal liability and jail terms for local employees in cases of non-compliance. India’s content moderation approach has increasingly been scrutinized for enabling opaque takedown demands without clear accountability. This lawsuit raises broader questions about government power over online speech and the responsibility of platforms to protect user expression in politically sensitive contexts. It also reflects a growing tension between tech companies and government interests in one of the world’s largest democracies.
UK investigates TikTok and Reddit over children’s data practices
On March 3, UK regulators launched a formal investigation into TikTok and Reddit over potential misuse of children's personal data. The probe will assess whether these platforms are deploying design features—referred to as dark patterns—that inappropriately encourage children to stay online longer and/or share more personal information than necessary. The investigation builds on the UK’s Age Appropriate Design Code, which requires platforms likely to be accessed by children to prioritize their best interests, limit profiling, and ensure that data collection is fair, necessary, and transparent. Regulators are particularly focused on whether the platforms' recommendation systems and engagement tactics may be contributing to addictive behaviors or mental health risks among young users.
According to the Information Commissioner’s Office (ICO), if violations are confirmed, an enforcement action could follow, including financial penalties or mandated platform changes. This investigation builds on earlier regulatory efforts, including a £12.7 million fine issued to TikTok in 2023 for prior breaches of child data protections.
Meta halts ad targeting after legal challenge in the UK
Also in the UK, Meta has agreed to stop serving targeted ads to a woman after she filed a legal complaint in 2022 challenging the company’s profiling and ad-targeting practices under the GDPR. The woman, who had previously experienced mental health difficulties, argued that Facebook’s behavioral ads were intrusive and distressing, especially as she continued to receive ads related to sensitive topics despite her attempts to restrict them.
Rather than contest the case in court, Meta agreed to cease personalized ad targeting against her profile. Lawyers involved described the outcome as a potential milestone for individual enforcement of data rights, particularly in the context of automated decision-making and behavioral profiling under the UK GDPR.
Other developments
- Rideshare app products liability. On March 4, a Missouri state appeals court allowed a case against Lyft to go forward after a lower court had granted summary judgment to the rideshare company. The plaintiff is the mother of a Lyft driver who was killed in a carjacking by two minors who used a fake account to lure him (Lyft does not allow minors to use the service). The court allowed the plaintiff’s products liability and negligence claims to proceed, emphasizing, among other things, the alleged failures of Lyft’s app design to protect drivers from potentially fraudulent or violent riders. The case was before the Missouri Court of Appeals, Eastern District (Ameer v. Lyft, Inc., case no. ED112455).
- North Carolina AG & TikTok. North Carolina Attorney General Jeff Jackson’s office is defending its lawsuit against TikTok, which alleges the app’s design features exploit the developing minds of young users and push them into compulsive, addictive use. TikTok is seeking to dismiss the lawsuit. On March 13, the AG filed a brief arguing that TikTok intentionally exploits minors through defective product design. North Carolina’s suit claims that more than 1 million teens and children in the state use TikTok. The case is before the Wake County Superior Court (North Carolina v. TikTok Inc., case no. 24CV032063).
- New edtech decision. On March 17, a federal judge partially denied dismissal of a class action brought by parents whose children are required to use the defendant company’s tech products in their public schools. The judge allowed several of their data privacy claims to go forward, including statutory claims under the California Invasion of Privacy Act (CIPA) and the California Comprehensive Computer Data Access and Fraud Act (CDAFA), as well as a California constitutional privacy claim. The case is before the US District Court for the Northern District of California (Cherkin et al. v. PowerSchool Holdings, Inc., case no. 24-cv-02706-JD).
- Google settles discrimination suit. On March 18, Google agreed to pay $28 million to settle a class action lawsuit claiming the company favored white and Asian employees, reportedly paying them higher wages and putting them on higher career tracks than other workers. The class includes more than 6,600 Google employees in California who worked for the company from 2018 to 2024. The case is before the California Superior Court in Santa Clara County (Cantu v. Google LLC et al., case no. 21CV392049).
- AI copyright and privacy. On March 26, a federal court partially denied OpenAI’s motion to dismiss a copyright lawsuit brought by the New York Times, which alleges the company used the publisher’s copyright-protected content to train its AI models without consent or compensation. On March 18, the US Court of Appeals for the District of Columbia Circuit issued a unanimous ruling rejecting an inventor’s bid to copyright artwork created exclusively by his AI system, finding that copyright protection remains available solely to human creators. Additionally, a federal judge in Chicago approved a nationwide class-action settlement resolving privacy claims against facial recognition company Clearview AI. The settlement includes no immediate or specific monetary payouts for victims, although class members may instead receive a share of the company’s potential value. (New York Times Co. v. Microsoft Corp. et al., US District Court for the Southern District of New York, case no. 1:23-cv-11195-SHS-OTW; Thaler v. Perlmutter, US Court of Appeals for the District of Columbia Circuit, no. 23-5233; In re Clearview AI, Inc., Consumer Privacy Litigation, US District Court for the Northern District of Illinois, case no. 1:21-cv-00135).
- DOGE updates. As covered in last month’s roundup, the Department of Government Efficiency (DOGE) has been party to multiple recent rulings concerning its ability to access sensitive information across federal agencies. On March 20, US District Judge Ellen Lipton Hollander issued a temporary restraining order barring Musk and his team from accessing personally identifiable information at the Social Security Administration, describing DOGE’s efforts as a “fishing expedition” at the SSA. Additionally, on March 7, US District Judge Colleen Kollar-Kotelly denied a request to block DOGE staff from accessing a sensitive federal payment system at the Treasury Department. (AFSCME v. Social Security Administration, US District Court for the District of Maryland, case no. 1:25-cv-00596-ELH; Alliance for Retired Americans v. Bessent, US District Court for the District of Columbia, case no. 1:25-cv-00313-CKK).
- Children’s online safety litigation. On March 31, a federal court enjoined a first-of-its-kind Arkansas law requiring social media platforms to verify the age of all account holders in the state, with those under 18 only able to open accounts with parental permission. On March 21, a federal court dismissed NetChoice’s lawsuit challenging a Florida law that restricts children under 16 from opening accounts on certain social media platforms; a week later, the tech industry group refiled its complaint against the law. On March 18, NetChoice filed a new lawsuit against an age verification law in Louisiana, claiming it violates the First and Fourteenth Amendments. Lastly, on March 13, a federal court granted NetChoice’s second request for a preliminary injunction in its suit against California’s Age-Appropriate Design Code Act. (NetChoice LLC v. Griffin, US District Court for the Western District of Arkansas, case no. 5:23-cv-5105; CCIA & NetChoice v. Uthmeier, US District Court for the Northern District of Florida, case no. 4:24-cv-438-MW/MAF; NetChoice v. Murrill, US District Court for the Middle District of Louisiana, case no. 3:25-cv-00231; NetChoice v. Bonta, US District Court for the Northern District of California, case no. 22-cv-08861-BLF).
- New GDPR complaint against OpenAI. On March 20, the European Center for Digital Rights (noyb) filed a new complaint against OpenAI, alleging several violations of the GDPR. The complaint concerns a Norwegian man’s use of ChatGPT, in which the product generated a false story claiming he had killed his children and been sentenced to decades in prison. The complaint seeks an order requiring OpenAI to delete the defamatory output and fine-tune its model to eliminate such results, to compel OpenAI to restrict processing of the man’s personal data, and to impose a fine under the GDPR.
- Web-tracking consent decision. On March 24, a federal court relied on a website’s privacy policy, hyperlinked in its footer, to rule that the plaintiff had constructive notice of the policy and thus had implicitly consented to the website’s use of third-party user tracking software. The court granted summary judgment to the website on her Pennsylvania Wiretap Act claims, setting an important precedent for the hundreds of similar cases challenging adtech trackers across the nation. (Popa v. Harriet Carter Gifts, Inc., US District Court for the Western District of Pennsylvania, case no. 2:19-cv-450).
The Tech Justice Law Project (TJLP) maintains a regularly updated litigation tracker gathering tech-related cases in US federal, state, and international courts. To help ensure TJLP’s tracker is as complete and up-to-date as possible, readers can use this form to suggest new cases or propose edits. TJLP also welcomes additional information and suggestions for future roundups here, or by email at info@techjusticelaw.org.