Perspective

What the AI Whistleblower Protection Act Would Mean for Tech Workers

Sophie Luskin / May 30, 2025

On May 15, Senate Judiciary Chairman Chuck Grassley (R-IA) introduced the AI Whistleblower Protection Act (AIWPA), a bipartisan bill to protect individuals who disclose information regarding a potential artificial intelligence security vulnerability or violation. Under the bill, these whistleblowers can be current or former employees and independent contractors, and like other measures protecting against whistleblower retaliation, it does not require that they prove laws have been broken to be covered, only that they act in good faith in flagging a possible violation.

The bill comes at a critical moment for AI safety, when transparency and accountability standards must be cemented in the public interest through regulation. With a wave of deregulation potentially coming at the state and federal levels, whistleblower protections may well become even more crucial to holding the AI industry accountable.

Recent developments in Washington underscore this expected trend. On May 8, tech industry leaders urged legislators to employ “light touch” regulation in a Senate hearing on the global AI race, and on May 22, the House passed a budget reconciliation bill containing a provision that would impose a moratorium barring states from regulating AI.

This laissez-faire regulatory environment is likely to further entrench law enforcement’s reliance on whistleblower disclosures. When an industry is under-regulated, authorities need the information employees provide to properly investigate misconduct and violations of law. Government agency whistleblower programs — including those at the Securities and Exchange Commission (SEC), Commodity Futures Trading Commission (CFTC), Internal Revenue Service (IRS), and Department of Justice (DOJ) — depend heavily on whistleblower tips as a roadmap for discovery; without them, these agencies would rarely uncover the full scope of misconduct on their own. An analysis of the DOJ’s fiscal year 2023 fraud statistics shows that whistleblower tips accounted for around 70% of the civil funds the US recovered in such cases from 1987 to 2023, nearly $53 billion in total.

Whistleblowing is a particularly salient issue for Grassley, dating back to his work modernizing and strengthening the False Claims Act in 1986. Last year, his office obtained a letter from whistleblowers to then-SEC Chair Gary Gensler regarding OpenAI’s allegedly illegal nondisclosure and non-disparagement agreements, which purportedly failed to exempt disclosures of securities violations to the SEC, required pre-approval for disclosures to federal authorities, mandated confidentiality about violations, and forced workers to forfeit whistleblower compensation. The company claims to have since rectified those agreements.

As Grassley stated in the bill’s press release, “Whistleblowers are one of the best ways to ensure Congress keeps pace as the AI industry rapidly develops.”

In the bill, an “AI security vulnerability” refers to any security breach or vulnerability that could allow individuals or foreign entities to acquire, through theft or other illicit means, the technology a company is developing. An “AI violation” means any breach of federal law occurring during or related to the development, deployment, or use of artificial intelligence, or any inadequate response to a concrete and specific risk that the development, deployment, or use of artificial intelligence may pose to public safety, public health, or national security.

The scope of who can receive a disclosure under the anti-retaliation protections is broad: the bill permits reporting to most federal law enforcement or regulatory agencies, the attorney general, or Congress, and covers individuals who make disclosures internally through existing company compliance programs or to their supervisors. It protects individuals’ testimony in administrative and judicial proceedings and covers aiding in government investigations. Its anti-retaliation provisions prohibit employers from discharging, demoting, suspending, threatening, blacklisting, or harassing any covered individual for engaging in protected activity.

AI whistleblowers who experience retaliation could file complaints with the Labor Department and pursue remedies in federal court, including job restoration, twice the amount of back wages owed, and compensation for damages. The legislation also explicitly states that these protections cannot be waived through employment contracts or forced arbitration clauses.

The bill follows a long line of sector-specific legislation for whistleblower protections, granting rights to employees in developing or previously unregulated fields. Congress has repeatedly enacted whistleblower protection laws that cover employees across relevant industries or sectors, such as nuclear energy in the 1978 Energy Reorganization Act, the federal government in the 1989 Whistleblower Protection Act, airlines under AIR21 in 2000, and Wall Street under Dodd-Frank in 2010.

While the bill offers critical protections, its definition of AI violations should still be expanded to include cases in which individuals report that their employer has failed to follow its own internal safety and security protocols. Such a provision would have covered many of the concerns employees have brought to the press over the past few years, such as OpenAI’s reportedly rushed testing of GPT-4 Omni last year. By expanding the definition of AI violation to cover circumvention of internal protocols, a wider range of public-interest concerns about misconduct could be brought forward and taken up by the government.

Merely by bringing attention to this issue, the bill could help abate fears around whistleblowing in the AI industry and ensure that all employees are fully informed of the breadth of their rights to come forward, beyond what is covered by government agency whistleblower programs.

The AIWPA does not directly offer monetary awards to those who come forward. Congress typically affords these only when it passes overarching industry standards, which it has not yet done for AI. However, there are cases where AI employees can still seek protections under Dodd-Frank, such as when a publicly traded or SEC-registered company violates securities laws in implementing, deploying, or marketing its technology, or in soliciting investors for it. Congress must consider AI-specific employee protections in any AI regulation it passes.

When regulation of an industry lags behind development and deployment, whistleblower laws have time and again helped fill the gap. Safe avenues for disclosing potential violations to the appropriate law enforcement agency improve the government’s ability to regulate, offering a window into company conduct that it would not otherwise have. The protections for those who disclose to a person with supervisory authority over them, or to someone with the authority to investigate, discover, or terminate the misconduct, would, if executed properly, help create a stronger risk management culture within organizations, countering the current chilling effect on internal reporting.

Authors

Sophie Luskin
Sophie Luskin is a researcher with Princeton's Center for Information Technology Policy studying regulation, issues, and impacts around generative AI for companionship, social and peer media platforms, age assurance, and consumer privacy to protect users and promote responsible deployment.
