Perspective

Congress Is About to Hand Corporate America a License to Discriminate

Jason Solomon, Abby Frerick / Jun 30, 2025

The US Senate Chamber in 1873 in a restored image from a glass negative. Brady-Handy Photograph Collection (Library of Congress), Public domain, via Wikimedia Commons

D.K. is an Indigenous and Deaf woman who did customer service for Intuit’s TurboTax, receiving positive supervisor and customer feedback every year. In the spring of 2024, a supervisor encouraged D.K. to apply for a manager position. However, the company used an AI video interview platform to review candidates. These types of systems are known to systematically rate non-white and deaf speakers lower, yet Intuit did not provide D.K.’s requested accommodation. Intuit ultimately rejected D.K. for the position, and feedback from the interview suggested she “practice active listening.”

Derek Mobley is a 51-year-old Black man who suffers from depression and anxiety. After being laid off in 2017, Mobley applied for more than 100 jobs through Workday's AI-powered hiring platform. For many applications, Mobley had to take a Workday-branded assessment or personality test. Despite being well qualified, he faced a pattern of immediate rejections, often within hours of submitting applications to Workday-powered systems. Mobley suspects the platform's AI algorithms discriminated against him based on his age, race, and mental health status.

D.K. and Derek Mobley are among the faces of 21st century discrimination in the United States, and with an estimated 98.4% of Fortune 500 companies using AI in the hiring process, the potential scale of AI discrimination is immense. Yet Congress is about to make the problem worse.

A two-pronged attack on civil rights

As AI advances, the Trump administration has launched an unprecedented attack on civil rights enforcement. Within days of taking office, federal agencies began rolling back existing AI policies and directives, with the Equal Employment Opportunity Commission and the Department of Labor removing workplace AI discrimination guidelines from their websites. This represents just one front in a broader campaign to weaken civil rights protections that have safeguarded Americans for decades.

Against this backdrop, Congress is considering a provision in the budget bill that would ban states from regulating AI, eliminating possible oversight of AI discrimination with no federal replacement. This regulatory vacuum will create a perverse incentive: companies will rush to adopt discriminatory AI systems because they're cheaper than human decision-makers and difficult to challenge in court under existing laws.

The two developments are related. Together they deliver a one-two punch to ordinary Americans hoping for a fair shot at a job, a loan, or coverage of medical bills: first eliminate federal oversight, then preempt the states. The result would be a pass for corporate interests at the moment we need protection and accountability the most.

Understanding why requires looking at how discrimination law actually works. Federal civil rights laws weren't designed for the age of black-box algorithms.

Take Title VII of the Civil Rights Act, which prohibits workplace discrimination. You can bring a claim in one of two ways. The first is by showing “disparate treatment” based on race, gender, or another protected characteristic; in such cases, courts will look for evidence of “animus” or discriminatory intent, such as a male supervisor who has made comments about not wanting to work with women, or an employee on the hiring team who has referred to Black candidates as DEI hires.

The second way is what’s called “disparate impact,” where policies or practices disproportionately harm people of a particular race, gender, or other trait. Unlike “disparate treatment” claims, which require proof of intentional bias, disparate impact focuses on unnecessary barriers, recognizing that discrimination often operates through facially neutral practices. This is frequently how AI systems discriminate: they treat people the same on the surface while producing unequal effects.

To bring a disparate impact case, a person has to identify the specific hiring practice – say, use of an AI screening tool – causing disproportionate harm. Indeed, AI tools frequently cause such harm: recent research found that AI screening systems selected resumes with white-associated names 85% of the time.
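One common first screen for the kind of disproportionate harm described above is the EEOC's "four-fifths rule": if one group's selection rate is less than 80% of the highest group's rate, the practice may be evidence of adverse impact. The sketch below illustrates the arithmetic; the applicant and selection numbers are hypothetical, chosen only to show how lopsided rates trip the threshold, and this heuristic is a screening device, not a legal conclusion.

```python
# Minimal sketch of the EEOC four-fifths (80%) rule for adverse impact.
# Numbers are hypothetical, for illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants the screening tool advanced."""
    return selected / applicants

def impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one."""
    lo, hi = sorted((rate_a, rate_b))
    return lo / hi

# Suppose an AI resume screener advances 85 of 100 applicants from one
# group but only 15 of 100 from another.
rate_group_a = selection_rate(85, 100)  # 0.85
rate_group_b = selection_rate(15, 100)  # 0.15

ratio = impact_ratio(rate_group_a, rate_group_b)
print(f"impact ratio: {ratio:.2f}")
print(f"below four-fifths threshold: {ratio < 0.8}")
```

A plaintiff would still need access to the underlying selection data to run even this simple check, which is exactly what trade-secret claims tend to block.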

Or consider D.K., the Deaf Indigenous job applicant in Colorado. There is no reason to believe her employer or the developer of the AI video software intended to discriminate against Deaf or Indigenous people. But the software – and the employer’s decision to use it – kept well-qualified people like D.K. from having a fair shot. To be sure, the employer can still defend the tool by showing it is job-related and consistent with business necessity, but if a less discriminatory alternative is available and not being used, it is still discrimination.

Disparate impact liability was a target in the billionaire-funded Project 2025, and President Trump has followed suit. He recently signed an executive order titled "Restoring Equality of Opportunity and Meritocracy" that attempts to dismantle disparate impact protections across the federal government. While individuals can still bring private disparate impact lawsuits under Title VII and other civil rights laws, they'll now face this battle without federal agencies as allies.

Even if the attack on disparate impact fails, that legal approach alone may not be enough. Discrimination victims must show that the AI caused the harm, but even experts struggle to draw clear lines between inputs and outcomes in complex models. And most AI vendors treat their systems as trade secrets, blocking access to the very data needed to prove discrimination. Courts often demand this proof early, before plaintiffs even have access to the necessary documentation.

Put simply: US civil rights laws were built for human decisions, not black-box code. Without updates, the protections we’ve relied on for decades will crumble under the weight of opaque, unaccountable AI systems.

The need for state leadership

This is precisely why developing new laws in the states has become essential. The federal provision under consideration would forbid states from developing any new AI regulations and effectively nullify existing state laws, including basic transparency requirements like those in Colorado. Colorado’s law simply requires deployers of high-risk AI systems to use reasonable care to protect individuals from any known or reasonably foreseeable risks of algorithmic discrimination.

Colorado's approach doesn't include sweeping bans on AI. Instead, it informs workers and consumers when algorithms are deciding their fate and provides the data and information needed to contest discrimination. These modest protections represent exactly the kind of innovative, practical solutions that states can develop when federal leadership fails.

Going forward

AI has immense potential. In healthcare, research indicates AI can help doctors forecast patient survival across multiple cancer types and predict patient response to standard treatments. In education, AI-powered systems can adapt to individual learning styles and provide personalized tutoring at scale. But without proper oversight, AI systems can perpetuate discrimination.

Congress must reject this AI provision immediately. The bipartisan group of 40 state attorneys general who called it "irresponsible" understands what's at stake: the systematic erosion of civil rights protections, justified by a false choice between common-sense regulation and technological progress.

If Congress won't lead on AI accountability, then it has no business stopping states from protecting their own citizens. As we embrace AI's benefits, we must stand up against discrimination and not abandon our commitment to equal opportunity.

We can have innovation and civil rights – but only if we demand both.

Authors

Jason Solomon
Jason Solomon is the Director of the National Institute of Workers’ Rights (NIWR), an advocacy organization focused on making it harder for employers to violate workers’ rights and easier for workers to fight back. Before joining NIWR, Jason was Executive Director of the Deborah L. Rhode Center on t...
Abby Frerick
Abby Frerick is the Paul H. Tobias Fellow at the National Institute for Workers’ Rights.
