Perspective

When Algorithms Learn to Discriminate: The Hidden Crisis of Emergent Ableism

Sergey Kornilov / Jul 25, 2025

Turning Threads of Cognition by Hanna Barakat & Cambridge Diversity Fund / Better Images of AI

The Equal Employment Opportunity Commission's $365,000 settlement with iTutorGroup in 2023 was straightforward: the résumé-screening software automatically rejected women over 55 and men over 60. This was a clear case of age discrimination, with hard-coded rules and an obvious fix.

But a 2024 American Civil Liberties Union (ACLU) complaint to the Federal Trade Commission reveals something far more troubling. In the legal complaint, the ACLU alleged that products from Aon Consulting, Inc., a major hiring technology vendor, “assess very general personality traits such as positivity, emotional awareness, liveliness, ambition, and drive that are not clearly job related or necessary for a specific job and can unfairly screen out people based on disabilities.” The ACLU’s client in the case was a biracial job applicant with autism who faced discrimination through Aon's widely used hiring assessments.

But unlike iTutorGroup’s code-level exclusion, the discrimination alleged in Aon’s system was far more opaque: its bias was buried not in an explicit rule, but in statistical patterns. Aon's ADEPT-15 personality test was never programmed to detect disability. It didn't need to be.

I call this phenomenon emergent ableism, a form of discrimination that arises when pattern-matching algorithms encounter human cognitive diversity. No malicious intent required.

For example, a system like Aon’s may use questions that mirror clinical diagnostic criteria, such as "I prefer working alone" or "I focus intensely on details." When neurodivergent applicants answer honestly, the algorithm assigns lower desirability scores to their response patterns, which deviate from the “successful” neurotypical profiles it was trained to prefer. The applicant in the ACLU complaint never disclosed their autism diagnosis, yet the assessment may have functioned as a proxy screening tool.
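To make the mechanism concrete, here is a minimal, hypothetical sketch of proxy scoring: a screener that merely compares answers to a profile of past hires, with no notion of disability anywhere in the code, still ranks an honest neurodivergent response pattern lower. The items, profile, and numbers are invented for illustration and are not drawn from ADEPT-15 or any real assessment.

```python
# Hypothetical illustration of proxy screening: a personality scorer that was
# never told about disability can still penalize answer patterns that mirror
# clinical diagnostic criteria. All items, profiles, and numbers are invented.
import numpy as np

# Likert items (1 = strongly disagree ... 5 = strongly agree)
ITEMS = ["prefers_working_alone", "focuses_intensely_on_details",
         "enjoys_small_talk", "likes_frequent_context_switching"]

# "Ideal" profile learned from historical high performers, who happened to
# share neurotypical communication styles.
successful_profile = np.array([2.0, 3.0, 4.5, 4.0])

def desirability_score(answers: np.ndarray) -> float:
    """Cosine similarity between an applicant's answers and the learned profile."""
    return float(answers @ successful_profile /
                 (np.linalg.norm(answers) * np.linalg.norm(successful_profile)))

neurotypical_answers = np.array([2.0, 3.0, 4.0, 4.0])
honest_autistic_answers = np.array([5.0, 5.0, 2.0, 2.0])  # honest, job-irrelevant

print(desirability_score(neurotypical_answers))     # close to 1.0
print(desirability_score(honest_autistic_answers))  # noticeably lower
```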

This pattern extends beyond individual cases. Dr. Sam Brandsen's research at Duke University shows that AI language models systematically associate neurodivergent terms and concepts, such as “I have autism,” with more negative connotations than “I am a bank robber.” When these same language models power hiring tools, they embed discriminatory associations without explicit programming.
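A minimal probe in the spirit of such audits (not Brandsen's methodology) can be run with an off-the-shelf sentiment model; the model choice and phrasing below are illustrative, and results will vary by model.

```python
# Compare how a generic sentiment model scores neutral self-descriptions of
# neurodivergence against clearly negative statements. Illustrative only.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # default English sentiment model

phrases = [
    "I have autism.",
    "I have ADHD.",
    "I am a bank robber.",
    "I enjoy collaborating with my team.",
]

for phrase in phrases:
    result = sentiment(phrase)[0]
    print(f"{phrase!r}: {result['label']} ({result['score']:.3f})")
```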

Why emergent ableism threatens more than civil rights

The hiring technology sector has moved far beyond keyword matching. Platforms from Workday, HireVue, and Pymetrics analyze vocal patterns, micro-expressions, game-based behavior, and response times. They compare these behavioral signatures to profiles of "successful" employees. When historical high performers share neurotypical communication styles, the algorithm may learn to penalize monotone speech, tangential storytelling, or variable eye contact. HireVue's own experience demonstrates the urgency: the company abandoned facial expression analysis in 2021 after research revealed it systematically penalized individuals with autism whose eye movements and expressions differ from neurotypical patterns.

Multiple federal agencies have already deployed these systems. The FDA procured HireVue to measure “behavioral and performance-based attributes” of candidates for 950 positions. The Army mandates TAPAS (Tailored Adaptive Personality Assessment System), a computer-based forced-choice test that analyzes behavioral responses to predict performance, discipline problems, and attrition risk, and is expanding into behavior-based and AI-powered recruitment. These systems can learn to identify neurodivergent traits through behavioral patterns, response timing, and communication styles, creating systematic exclusion from federal employment without explicit disability queries.

The use of these systems extends into housing and credit, where behavioral signals can harden into structural barriers. The Department of Justice's $2.3 million settlement with SafeRent over algorithmic tenant-screening practices shows how quickly these signals cascade into exclusion, but it addressed just one company among thousands using similar technology. These patterns are not just discriminatory; they impose real harm on well-being. A 2022 Stanford study found that adults with ADHD incur late-payment penalties at higher rates than their neurotypical peers, not from lack of funds or motivation to be good tenants, but because of neurobiological and executive function differences that can negatively affect payment timing. When screening algorithms treat these patterns as evidence of severe credit risk, housing can become systematically inaccessible.

Why current law fails

The Americans with Disabilities Act and Fair Housing Act were written for a world of human decision-makers and identifiable policies. Under the disparate impact doctrine, plaintiffs need to identify the specific rule causing discrimination. But in a neural network with millions of parameters, where is the “rule”?

Even when the Supreme Court affirmed disparate impact theory under the Fair Housing Act in Texas Department of Housing and Community Affairs v. Inclusive Communities Project (2015), it emphasized the need to identify specific policies causing discrimination, a standard designed for human decision-makers, not for algorithmic systems that continuously evolve their parameters. That federal courts require proof of discriminatory intent seems out of step when the discriminating decision-maker can be a gradient descent algorithm.

Regulators are scrambling to catch up. The EEOC's 2022 guidance confirmed AI-based disability screen-outs violate the Americans with Disabilities Act, but assumes companies can audit for bias using demographic data that they are forbidden to collect. While New York City's Local Law 144 mandates algorithmic auditing only for race and gender in hiring, the New York State Senate recently passed the New York AI Act, which would regulate automated decision-making across employment, education, housing, and healthcare – requiring independent audits and creating enforcement mechanisms. Yet cognitive diversity remains largely invisible to regulators nationwide.

Technical solutions exist

Despite the legal and regulatory gaps, the technology to detect and correct emergent ableism already exists. Privacy laws rightly prevent companies from collecting disability data, but the detection of bias doesn't require medical records or other related private information. Computer scientists have developed "fairness without demographics" methods that are especially relevant to preventing emergent ableism.

One such method is counterfactual testing, which asks a simple question: “Would this person’s risk score change if they paid rent on the 1st versus the 5th?” If the answer is yes even though the payments are otherwise identical, the algorithm is penalizing executive function differences, not financial risk. This technique helps expose hidden correlations between behavioral patterns and exclusionary outcomes.
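A sketch of what such a test might look like in code, assuming a hypothetical `risk_model` with a `score()` method and a placeholder feature name for payment timing:

```python
# Sketch of counterfactual testing against a hypothetical tenant-screening
# model. The feature name, tolerance, and model interface are placeholders.
import copy

def counterfactual_payment_gap(risk_model, applicant: dict) -> float:
    """Return how much the risk score shifts when only the payment day changes."""
    early, late = copy.deepcopy(applicant), copy.deepcopy(applicant)
    early["typical_payment_day"] = 1   # pays on the 1st
    late["typical_payment_day"] = 5    # pays on the 5th, still within grace period
    return risk_model.score(late) - risk_model.score(early)

def audit(risk_model, applicants, tolerance: float = 0.01):
    """Flag applicants whose scores are sensitive to payment timing alone."""
    return [a for a in applicants
            if abs(counterfactual_payment_gap(risk_model, a)) > tolerance]
```

Any non-trivial score gap on otherwise identical records is exactly the hidden correlation the technique is designed to surface.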

Another approach is adversarial debiasing. Here, the technique trains secondary models to predict disability status solely from the primary model's outputs. If the secondary model can accurately guess whether someone has ADHD or another neurodivergent condition based on how the primary algorithm scored them, that reveals discriminatory bias that can then be corrected, even if those traits were never directly measured.
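A minimal sketch of the audit step, under some assumptions: disability labels come from a consented, opt-in research sample, the data below is synthetic, and the threshold is illustrative. The adversary sees only the primary model's scores.

```python
# Can a secondary model recover neurodivergence from nothing but the primary
# model's scores? Synthetic data stands in for a consented audit sample.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
has_adhd = rng.integers(0, 2, size=n)                            # self-reported, opt-in
primary_scores = rng.normal(0.6, 0.1, size=n) - 0.08 * has_adhd  # simulated leaked signal

adversary = LogisticRegression()
pred = cross_val_predict(adversary, primary_scores.reshape(-1, 1),
                         has_adhd, cv=5, method="predict_proba")[:, 1]
auc = roc_auc_score(has_adhd, pred)

# AUC near 0.5 means the scores carry no disability signal; well above 0.5
# means the primary model's outputs act as a proxy and need correction.
print(f"adversary AUC: {auc:.2f}")
```

In full adversarial debiasing, the adversary's signal is fed back into training as a penalty, so the primary model learns to produce scores the adversary cannot exploit.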

A third strategy, behavioral cluster analysis, sidesteps diagnoses entirely. Instead of looking for specific characteristics, it groups users by interaction patterns (e.g., rapid keystrokes, irregular clicking, non-linear navigation). Analysts can then evaluate whether users in certain behavioral clusters experience systematically worse outcomes. If so, discrimination is present, regardless of whether any individual user has a known disability.
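A minimal sketch of this approach, using synthetic interaction features and a generic clustering step; the feature names, the simulated screener, and the approval outcome are placeholders, not a real pipeline:

```python
# Behavioral cluster analysis: group users by interaction patterns and compare
# outcomes per cluster, with no diagnostic labels anywhere. Data is synthetic.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
n = 5000
features = np.column_stack([
    rng.normal(200, 50, n),    # mean inter-keystroke interval (ms)
    rng.normal(0.3, 0.1, n),   # click-timing irregularity
    rng.normal(0.5, 0.2, n),   # non-linear navigation ratio
])
# Simulate a screener that quietly penalizes slow typists.
approved = rng.random(n) < np.where(features[:, 0] > 280, 0.45, 0.75)

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(features))

overall = approved.mean()
for c in range(5):
    rate = approved[clusters == c].mean()
    # A cluster whose approval rate falls far below the overall rate signals
    # systematic exclusion tied to behavioral style, not individual merit.
    print(f"cluster {c}: approval {rate:.2%} (overall {overall:.2%})")
```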

These methods and their more sophisticated extensions are mature, peer-reviewed, validated, and ready for deployment. What's missing is the legal requirement to use them and the institutional support to deploy them at scale.

The policy roadmap

Artificial intelligence will shape the public’s access to opportunity, whether policymakers act or not, and will do so at an increasing rate and scale.

Long-term solutions will require Congress to amend the ADA to explicitly cover statistical discrimination in automated systems and to mandate behavioral-bias audits for any AI touching federal funds. But while federal enforcement faces political headwinds under the current administration, with rollbacks of DEI initiatives and resistance to AI regulation both taking center stage, states possess powerful tools to combat emergent ableism. They are already regulating algorithmic hiring for race and gender; extending that scaffolding to cognitive diversity is the natural next step.

The ACLU's complaint against Aon provides a roadmap: when hiring tools use personality questions that mirror clinical diagnostic criteria, they function as disability screening tools subject to ADA compliance requirements. Here is what states can do now:

First, expand algorithmic auditing laws. New York already requires bias testing for race and gender in hiring algorithms. California has multiple bills pending that would establish similar requirements. Adding neurodiversity metrics using behavioral cluster analysis would cost little but protect millions.

Second, leverage consumer protection authority. State Attorneys General can pursue companies whose AI systems exhibit statistical discrimination patterns, building on recent precedents like SafeRent.

Third, use procurement power. States purchase billions of dollars in software annually. They can require vendors to demonstrate that their AI does not exclude neurodivergent applicants and users, creating market pressure for inclusive design.

Fourth, guarantee human review rights for high-stakes decisions in employment, housing, healthcare, and lending, mandating appeals to human decision makers when access is denied algorithmically.

Finally, partner with neurodiversity leaders. Companies like Microsoft, SAP, and JP Morgan Chase actively recruit neurodivergent talent. States can work closely with these leaders to establish best practices and create a significant competitive advantage for inclusive businesses.

The innovation imperative

Roughly one in five Americans identify as neurodivergent, living with ADHD, autism, developmental language disorders, or other conditions that shape how they think, process information, and make decisions. When algorithms systematically eliminate one-fifth of the population from employment, housing, and healthcare, the harm extends beyond civil rights. The Pentagon and other agencies recruit neurodivergent talent for pattern recognition abilities. Wall Street and Silicon Valley's greatest innovations often spring from unconventional minds. Yet statistical shortcuts systematically filter out this cognitive diversity.

We face a choice. We can allow emergent ableism to embed itself in every algorithm touching American life, or we can use the same technology to build more inclusive systems: systems that identify exclusionary patterns, recommend accommodations, and expand the talent pipeline that fuels innovation.

The path forward is clear. States should lead where federal action stalls. Technical solutions exist. And every organization that depends on human talent has a stake in ensuring the future is built for all.

Authors

Sergey Kornilov
Dr. Sergey Kornilov is a behavioral, molecular, and translational scientist with ADHD who lives in the gap between neurotypical system design and neurodivergent reality. He holds two PhDs in psychology and has published over 65 peer-reviewed articles and chapters on assessment, cognition, language, ...
