Perspective

It’s Time to Admit Consumer Education Can’t Deliver Tech Accountability

Brooklyne Gipson / Nov 13, 2025

This perspective is part of a series of provocations published on Tech Policy Press in advance of a symposium at the University of Pittsburgh's Communication Technology Research Lab (CTRL) on threats to knowledge and US democracy.

When writer, civil-rights activist, and educator Toni Cade Bambara asked, “What are we pretending not to know today?” she was identifying willful blindness as a mechanism for avoiding inconvenient truths. The question is both a pedagogical and a political intervention. It is designed to disrupt what philosopher Paulo Freire called the “culture of silence”: the unspoken social consensus that keeps marginalized groups quiet at their own expense. It calls on the disenfranchised to vocalize their latent understanding of the complex issues that plague them, and it demands that privileged individuals in elite circles confront their own complicity in that silence. Essentially, Bambara’s provocation asserts that we already possess the information, see the patterns, and comprehend structural inequality. The real question is: why are we acting like we don’t?

As an internet researcher studying disinformation that targets Black communities, I see this dynamic constantly in conversations about information pollution. A pervasive stereotype frames Black people as uneducated, and this bias leads people to overstate the role of individual ignorance in the proliferation of bad information online. Meanwhile, the systemic failures in the US that are central to the problem are neglected: cuts to educational funding, sophisticated disinformation campaigns that specifically target Black people and other people of color, and algorithms designed to amplify bad information and harmful discourse.

When attention is paid to the unique harms Black people face from consuming bad information online, researchers and policymakers often sidestep the root causes, promoting individualistic solutions like media literacy training instead. This approach fails because the most effective antidotes are intangible: social trust, narrative strategy, cultural context, and relational accountability. These are precisely the human connections and forms of localized knowledge that current technocratic frameworks ignore, simply because they cannot be scaled, metricized, bought, or sold.

The limits of literacy, the necessity of care

While media literacy is one tactic against disinformation, it fails for the same reasons other tech solutions do: even when it attempts a systems-oriented approach, it places the burden on the individual and lets the systems that create the problem off the hook. We must instead center our strategy on care and collective action. This means moving beyond frameworks that blame the user for failing to navigate a polluted information ecosystem, and instead building solutions that start with the needs of the least privileged. By serving them first, we create the most robust, equitable, and effective solution for everyone. This shift might entail moving beyond one-size-fits-all curricula. For instance, one approach involves developing resources in partnership with community elders that specifically name and deconstruct the historical tropes—like “welfare queens” or criminal stereotypes—most frequently weaponized against their communities. This builds defense through contextual understanding, not just technical skills.

This shift enables us to promote critical media literacy that fosters a profound understanding of history, politics, and culture, rather than just discrete skills. It forces us to stop appealing to the moral sensibilities of tech companies whose business models are often aligned with the spread of disinformation and whose political lobbying ensures they remain unregulated. We know that tech companies like Facebook have been implicated in electoral interference; yet we accept their ineffective stopgap measures, such as labeling false information on social media. The solution is not to train users to better survive a broken system, but to reimagine and rebuild it.

The human cost of algorithmic prey

The case of Black influencer Anthony Harris, which I explored in Dialogues on Digital Society, is a troubling example. Harris was used as a vector of disinformation—not a villain, but a victim. He was “algorithmic prey”: a Black man ensnared and amplified by a system designed to launder white-supremacist tropes by exploiting the veneer of a Black voice for credibility. His story reveals a cruel pattern of manipulation: the system weaponizes marginalized voices to legitimize the very ideologies that oppress them.

As scholar David Nemer notes in his work on Brazil, this is a global problem linked to inequality. “It is time to stop treating disinformation as a user behavior problem,” he writes, “and start seeing it for what it is: a structural, infrastructural, and systemic problem engineered by design.”

We need a hybrid model that combines technology with human-centered, community-led efforts. For example, instead of expecting individuals to debunk a coordinated campaign alone, rapid-response digital support teams, organized by trusted institutions, could be funded to quickly identify disinformation and flood the zone with accurate, culturally competent counter-messaging through established channels, such as WhatsApp groups.

To realize that vision, we must first undertake the work of consciousness-raising by posing Bambara’s question to ourselves, in our research labs, in our workshops, and beyond.

In the context of the current crisis, characterized by the strategic erosion of civic integrity, the reckless deployment of generative AI, and a coordinated campaign to dismantle institutions of oversight, the question is: what truths are we actively avoiding?

Authors

Brooklyne Gipson
Brooklyne Gipson is an assistant professor of Journalism and Media Studies at Rutgers University, New Brunswick, in the School of Communication and Information. She is an interdisciplinary scholar whose research areas include digital and social media environments, Black feminist digital/technology s...
