Perspective

Why Content Moderation Must Account for Disability

Tithi Neogi / Nov 13, 2025

A depiction of social media apps scattered across a table. (Image via www.vpnsrus.com)

If you are on social media and do not live under a rock, you may have seen artificial intelligence-generated videos of famous theoretical physicist and wheelchair user Stephen Hawking being horribly brutalized in myriad violent acts. As you casually scroll past these distasteful videos of Hawking, including ones of him being tossed around in a WWE ring, you may pause briefly on a reel wherein United States President Donald Trump claims to have found a cause for autism. Further doomscrolling still might push you into the manosphere, where men promote physical strength to subjugate and podcasters liberally use the adjective “retarded” as a symbol of a “great cultural victory.”

Digital media platforms are increasingly rife with content that depicts, promotes and normalizes violence and hatred, both real and imagined, against persons with disabilities.

These online instances of hatred and violence against persons with disabilities aggravate rising patterns of intolerance against the community in the real world.

It is therefore imperative to identify these instances of ableism as they occur, and to name ableism as a specific, under-addressed form of online harm. India illustrates how seemingly "innocuous" ableist content can persist online despite judicial or legislative intent to course-correct. Platforms must therefore explore pathways toward safer, more inclusive online spaces for persons with disabilities.

Ableism as an online harm

Ableism is a set of beliefs, processes and practices that projects a particular kind of bodily standard as ideal, species-typical and fully human, and views disability as a diminished state of being human. Ableist behaviors and practices can range from paternalistic, infantilizing and patronizing towards persons with disabilities to downright discriminatory and abusive.

Besides larger and more visible acts of discrimination, ableist microaggressions, or subtle insults steeped in negative perceptions of disability, can perpetuate inequality and the ongoing marginalization of persons with disabilities. Research on ableist microaggressions on social media reveals that ableism online can manifest in many ways — backhanded comments that paternalize, infantilize or invade the privacy of a person with a disability (such as comments questioning how someone with a disability can get intimate with their partner); being ignored (i.e. "ghosted"); exclusionary content moderation practices (for example, a platform recently flagged a user's videos and banned her from livestreaming after mistaking her for a minor because of her dwarfism); and sharing inaccessible content (like pictures without captions and alt text).

While gender, race and sexuality and their intersections with online harms on social media platforms have been carefully studied, ableism remains a far less-studied form of online harm.

Ableism on social media deserves targeted attention as we see increasingly aggressive behaviors towards persons with disabilities, such as depictions of violence on generative AI platforms. Similarly, when a head of state celebrates discovering an unfounded "cause" for autism (Tylenol, a widely used over-the-counter drug), a spate of misinformation with eugenicist undertones is likely to spread on social media platforms.

India shows speech codes are ineffective against ableist microaggressions

India has taken a judicial approach to regulating online ableism.

A petition before India’s Supreme Court alleged that some stand-up comedians had made derogatory remarks about persons with Spinal Muscular Atrophy (SMA) on YouTube, and sought a prohibition on ableist content on digital platforms, along with speech guidelines governing such content. The Court viewed this case as commercial speech, undeserving of free speech protections. It also directed the government to issue guidelines that regulate online content and safeguard the dignity and rights of persons with disabilities.

Experts rightly expressed their concerns about the Court’s decision having a chilling effect on free speech. However, what many failed to notice was that the Court’s decision, focused on regulating speech through guidelines, would do little to curb the spread of ableist content that circulates organically on social media — like the Hakla Shah Rukh meme that had already gone viral.

The Hakla (stammerer in Hindi) meme is a morphed image of Bollywood superstar Shah Rukh Khan in a bizarre hairstyle, which ridicules his character’s stutter in the film Darr. Despite reported takedown attempts by the actor’s team and platform removal measures, the meme continued to be spammed across social media platforms. Any discourse on the ableist undertones of the viral meme was largely absent from mass media. Platforms treated it as a policy violation tied to bullying and harassment (of the actor), and the meme flew under the radar of news reports as something bizarre and funny, not hateful.

Platforms fall short of moderating ableism

Platforms’ content moderation practices often do not adequately address reports of disability-based harassment, and offer limited tools for redressal that can only hide hateful content. The root cause of this limitation is that platforms in many cases do not recognize ableism as hate speech, and hence do not enable its removal.

The layered and intersectional nature of online hate makes moderation more complex. For instance, studies indicate that LGBTQ disabled content creators experience more ableist hate compared to non-LGBTQ creators. Persons with disabilities also face false censorship on platforms — that is, their content is wrongly flagged or removed.

An empirical study documenting the moderation experiences of 20 blind users on TikTok (referred to as BlindTokers) illustrates this. Some BlindTokers responded strongly to disability-based harassment from trolls, but TikTok in some cases flagged them for bullying instead of moderating the actual perpetrators.

As a result, the burden of confronting ableist hate falls disproportionately on persons with disabilities. They mostly resort to manually reporting or blocking ableist content themselves, mobilizing others to report hateful content on their behalf, or educating users by responding to hateful comments. Many end up self-censoring, discouraged by the chilling effect induced by hostile voices.

Recommendations

For social media platforms to be truly inclusive spaces, ableism must be recognized and addressed as an online harm. Francis Fukuyama recommends "middleware," or third-party content curation services that act as an editorial layer between dominant internet platforms and consumers, without risking censorship.

For disabled social media users, this could mean ableism-specific AI moderation tools that not only remove or hide hateful content but also summarize and categorize specific types of ableist harms. Such systems could be complemented by customized design features — like content warnings or nudges tailored to user preferences and sensitivities.

However, for moderation tools to effectively address ableism, AI systems need to identify and explain disability bias. Involving disability rights advocates and interest groups in developing these tools is therefore essential to building inclusive systems. A nuanced approach that balances moderation with community-driven efforts is crucial to making digital spaces inclusive for persons with disabilities.

Authors

Tithi Neogi
Tithi Neogi is a technology lawyer and Analyst at The Quantum Hub, where she works across platform governance, data protection, and emerging technologies. Her research and advocacy focus on integrating disability perspectives into digital policy and design, with a broader emphasis on online safety a...

Related

Perspective
DOJ’s Lawsuit Against Uber Illustrates the Limits of Tech Innovation for Accessibility — September 29, 2025
Analysis
DOGE & Disability Rights: Three Key Tech Policy Concerns — May 12, 2025
Podcast
Centering Disability Rights in US Tech Policy 35 Years After ADA — July 24, 2025