The Trump Administration's War on 'DEI' Will Enable AI-Powered Job Discrimination
Meher Sethi / Jun 27, 2025
Washington, DC - January 21, 2025: President Donald Trump poses in front of a stack of executive orders. (The White House)
The Trump administration’s war on 'DEI' has veered from the ridiculous to the dangerous. In a meme-worthy moment, officials scrubbed government websites of references to the Enola Gay—a World War II aircraft—because of its 'woke' name. But the administration is also targeting something far more consequential: the basic protections of the 1964 Civil Rights Act against employment discrimination, with especially alarming and largely overlooked implications for workers in the age of algorithmic hiring.
In April, President Trump signed the “Restoring Equality of Opportunity and Meritocracy” executive order, which claims to ensure fairness by emphasizing “equality of opportunity” over “equality of outcome.” Its primary target is the doctrine of disparate-impact liability—a foundational civil rights principle that the executive order grossly misinterprets. The Trump administration argues that disparate-impact liability requires “discrimination to achieve predetermined, race-oriented outcomes,” and “hinders businesses from making merit-based hiring decisions.” That is flatly wrong on the law and, perhaps more importantly, harmful for working-class Americans.
Disparate-impact liability is not a recent invention of DEI or the 'woke agenda.' It is a legal standard that dates back to 1971, originating in the Supreme Court’s landmark decision in Griggs v. Duke Power Co. In the Jim Crow South of the 1950s, Duke Power restricted Black employees to its lowest-paying “Labor” department. After Title VII of the 1964 Civil Rights Act banned employment discrimination, the company imposed new requirements for promotion to better-paying jobs: either a high school diploma or passing scores on two aptitude tests.
But the Court found that the new requirements had virtually nothing to do with the job and disproportionately excluded Black workers, who—due to Jim Crow-era barriers to education—were far less likely to have diplomas. Based on the evidence, the Court concluded that the requirements had the functional effect of preserving the discriminatory policies that existed before, keeping Black employees confined to low-paying jobs. Duke Power was not guilty of explicit ‘disparate treatment’ of Black employees—its policy was not overtly segregationist—yet the Court nonetheless found that an unfair ‘disparate impact’ lacking any legitimate business justification was enough to violate the Civil Rights Act.
In a famous analogy, the Supreme Court likened Duke Power’s hiring policy to Aesop’s “fabled offer of milk to the stork and the fox”—if both are offered milk in a narrow vase, only the stork can access it with its long beak; if both are offered milk in a shallow bowl, only the fox can lap it up. Thus, equality of opportunity is only met “provided that the vessel in which the milk is proffered be one all seekers can use.”
As the Supreme Court’s ruling in Griggs clarified, disparities in hiring outcomes that result from fair processes tied to genuine job qualifications are acceptable. However, hiring practices, policies, and requirements that seem neutral on their face yet have the functional effect of unfairly discriminating—for no legitimate business reason—do violate the law. Disparate-impact liability is meritocracy properly realized. Over the law’s history, the Department of Justice Civil Rights Division’s Employment Litigation Section—tasked with enforcing Title VII—successfully challenged minimum height and weight requirements and similar standards for prison guards, firefighters, and police officers, because they disproportionately excluded women and were shown to be unnecessary for job performance.
I spent a good portion of last year transcribing lengthy interview footage of the lawyers who worked on those landmark cases for the Yale Law School Living Civil Rights Law Project archives; many of those attorneys feared that future presidential administrations hostile to disparate-impact theory would do a significant disservice to the law and to the vulnerable working-class groups finally getting a fair shot. And that is precisely what the current administration has done. President Trump’s executive order sidelines all civil rights enforcement related to disparate-impact liability, and it could not have come at a worse time. The move has been framed as part of a larger effort against DEI, but among its chief and largely overlooked consequences is that it weakens federal enforcement against automated discrimination.
In the last several years, the hiring process has become increasingly automated. Former Equal Employment Opportunity Commission (EEOC) Chair Charlotte Burrows testified at a hearing that 83% of employers—including 99% of Fortune 500 companies—utilize automation in some aspect of their hiring processes. Such employment-related software tools can be used to facilitate otherwise textbook discrimination—explicit “disparate treatment” of applicants based on protected characteristics.
For example, in 2023, the EEOC reached a first-of-its-kind settlement with iTutorGroup, which had explicitly programmed its application review software to automatically reject female applicants aged 55 or older and male applicants aged 60 or older. As some state attorneys general have articulated, such cases are little more than plain and simple “disparate treatment”—intentional discrimination by employers—in digital form.
However, the particularly concerning cases of algorithmic discrimination are not the product of intentional exclusion by discriminatory employers. Modern artificial intelligence (AI) systems generally learn by identifying patterns within very large sets of training data, so biases embedded in that data can be reflected in the system’s behavior. Numerous studies have demonstrated that AI algorithms trained on biased datasets prefer résumés with white-associated names over résumés with Black-associated names, or résumés with male-associated names over résumés with female-associated names.
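To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of how a screening model trained on historically biased hiring decisions can learn to reward a proxy for gender rather than genuine qualifications. The data, features, and variable names are invented for illustration and are not drawn from any real employer’s system.

```python
# Illustrative only: synthetic data showing how bias in training labels
# propagates into a hiring model. Not a real employer's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# One job-related feature (years of experience) and one proxy feature
# correlated with gender (e.g., male-coded language on a resume).
experience = rng.normal(5, 2, n)
gender_proxy = rng.integers(0, 2, n)  # 1 = male-coded, 0 = otherwise

# Historical hiring decisions that were biased: past recruiters rewarded
# the male-coded proxy independent of experience.
past_hired = (experience + 2.0 * gender_proxy + rng.normal(0, 1, n)) > 6

# Training on those biased labels teaches the model to value the proxy.
X = np.column_stack([experience, gender_proxy])
model = LogisticRegression().fit(X, past_hired)

print(dict(zip(["experience", "gender_proxy"], model.coef_[0].round(2))))
```

In this toy setup, the model assigns a large positive weight to the gender proxy even though the proxy says nothing about job performance; no one had to intend discrimination for the discrimination to be baked in.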
These biases are particularly difficult to detect, trace, and remedy because they reside within the opaque 'black box' of complex AI systems. AI tools may simply be identifying the wrong kinds of patterns, such as learning to identify rabbits by looking for the green, grassy backgrounds that appear in most of the training photos, only to be stumped by a photo of a rabbit on a patch of concrete.
For example, Amazon discovered in 2015 that its in-house AI recruiting tool systematically downgraded résumés that included the word “women’s,” as in “women’s rugby team,” and eventually abandoned the project. The system had been trained on résumés submitted over a ten-year period—most of which came from men—causing it to develop a preference for male-coded language. And as the American Civil Liberties Union (ACLU) wrote, even if the AI tool’s programmers never intended to discriminate, such discrimination would be illegal under a disparate-impact theory.
The ACLU similarly relied on a disparate-impact theory in its complaint against HireVue’s video interview platform and automated speech recognition systems, which were alleged to contain biases against non-white and disabled people. In another recent example, a California federal judge last year dismissed disparate-treatment claims—the plain, intentional kind of discrimination—against Workday’s AI-powered screening software, but allowed disparate-impact claims to proceed, based on allegations by an applicant that the software was biased against Black people. In considering the variety of policy approaches to AI discrimination, the Brookings Institution described disparate impact as the legal doctrine “key to preventing AI discrimination.”
This was precisely the compliance guidance given to employers by the Biden administration’s EEOC—that AI tools used for screening, hiring, or promoting may have discriminatory disparate impacts based on protected characteristics, and should therefore be routinely audited by employers to ensure compliance with civil rights law. Biases against protected characteristics latent within AI tools have no legitimate business justifications. Indeed, this kind of AI discrimination is precisely what disparate-impact liability is designed to address: hiring practices that disproportionately exclude candidates on the basis of protected characteristics—divorced from any legitimate job-related purpose—undermine the merit-based employment process the Civil Rights Act was meant to protect.
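As an illustration of what such an audit can involve, the sketch below applies the EEOC’s longstanding “four-fifths rule” benchmark from the Uniform Guidelines on Employee Selection Procedures, under which a group’s selection rate below 80 percent of the highest group’s rate is generally treated as evidence of adverse impact. The applicant counts, group labels, and function names here are hypothetical.

```python
# Illustrative adverse-impact check based on the four-fifths (80%) rule.
# All applicant counts and group labels are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Share of applicants in a group that the screening tool advanced."""
    return selected / applicants

# Hypothetical outcomes from an automated resume-screening tool.
rates = {
    "Group A": selection_rate(selected=300, applicants=1000),  # 30%
    "Group B": selection_rate(selected=150, applicants=1000),  # 15%
}

highest = max(rates.values())
for group, rate in rates.items():
    impact_ratio = rate / highest
    flag = "potential adverse impact" if impact_ratio < 0.8 else "within benchmark"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

Here, Group B’s impact ratio of 0.50 falls well below the 0.8 benchmark, which would prompt an employer to ask whether the criteria driving the tool’s decisions are actually job-related, exactly the question the Griggs framework poses.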
Yet within a week of taking office, the Trump administration revoked that guidance, and in the executive order described above, it directed executive branch enforcers to abandon their disparate-impact cases entirely. The federal government has historically been the best-resourced enforcer of these protections, making the administration’s move to sideline disparate-impact enforcement a devastating setback.
Rather than dismantling this pillar of civil rights law, we should be expanding it, including by strengthening state laws that mandate audits of algorithmic bias, supporting private enforcement actions, and equipping state attorneys general to step in where federal enforcement retreats. Instead, congressional Republicans are seeking to void all state laws regulating AI, thereby unraveling any progress made by states to combat AI-driven discrimination.
The administration may try to frame this as a war on 'woke DEI,' but make no mistake: it’s a targeted rollback of hard-won, decades-old protections for workers, especially those from marginalized communities. The consequences of this legal shift are acute and concerning in an era of automated and AI-driven discrimination. In a world where machines increasingly make life-changing decisions, we can’t afford to abandon the only laws we have that ask whether those decisions are fair.