Fighting Disinformation Demands Confronting Social and Economic Drivers
Scott Timcke / Oct 9, 2025

The policy discourse around information integrity has a disconnect between democratic rhetoric and action. While authorities proclaim that “democracy is at stake,” their responses remain confined to platform-centric solutions that ignore the deeper sociological causes that make people susceptible to coordinated synthetic narrative campaigns. This misdiagnosis functions as a kind of security theater, reflecting a reluctance by policy researchers to examine how market structures, political choices, and material conditions create the fertile ground where disinformation takes root.
Information integrity as security theater
Concerned that “technology companies based in a handful of countries…monopolize control over global information flow,” the UN launched its Global Principles for Information Integrity in mid-2024, defining information integrity as a condition emerging from an ecosystem where accuracy and reliability create trustworthy information flows. The principles “envision an information ecosystem that delivers choice, freedom, privacy and safety for all, in which people everywhere can express themselves freely and make informed and independent decisions.” This represents a positive response to the apparent crisis of ‘misinformation’ and related phenomena that has undermined UN peacekeeping exercises and election integrity worldwide over the past decade.
Yet this framing treats symptoms while ignoring causes. Across the world, reactionary groups pursue antagonistic politics, using humor online to demean opponents. Leading figures interpellate subjects for stochastic violence (as in ‘disinformation starts at the top’), disinformation-as-a-service operates through a labor process with opportunistic intermediaries who are often in it for the money, and algorithmic platform recommendation systems funnel users toward reactionary content when gamed.
Maintaining respectability in global fora requires that reactionary politics and its techniques be spoken of in guarded terms, especially when center-left institutionalists are trying to build a broad-based coalition with the center-right. Hence, for example, gestures that “all sides are to blame.” This is little more than security theater, as information integrity remains a unidirectional concern about reactionaries threatening the stability of the world market; no one seriously fears Trotskyist movements forming a Fifth International.
While many authorities and public intellectuals proclaim that democracy is at stake, their actions do not match their rhetoric. Their wary discussions weigh the limits of toleration for bad faith reactionary actors against the perceived costs of having independent institutions adjudicate political disputes. While building political will and a supportive base remains important, the dirty work of defending democracy online requires a deeper understanding of the classical sociological dynamics that precede information markets.
Where is the social life of disinformation?
One prevailing ‘supply side’ method for controlling disinformation focuses on targeting various actors through content removal and account bans for terms of service violations. This ‘carceral approach’ seeks to build enforcement cases, oftentimes overlooking the social totality that enables disinformation to spread and take root. The key line of inquiry ought to be: what is happening in the social lives of audiences of content and users of platforms that makes them receptive to disinformation?
A good place to start is with the consequences of neoliberal realism, with how austerity, precarity, under- and unemployment have come to shape public life in ways that contribute to what economists Anne Case and Angus Deaton have termed ‘deaths of despair.’ When institutions fail to deliver on their promises and economic insecurity becomes the norm, alternative explanatory frameworks – however flawed – become appealing precisely because they offer coherent narratives about why things have gone wrong. To use German author and poet Hans Magnus Enzensberger’s formulation, there would be no consciousness industry unless the conditions for its flourishing were present. Misinformation succeeds not because people are inherently gullible, but because it provides meaning and agency in contexts where both have been systematically eroded.
One must also be critical of policy entrepreneurs selling solutions, especially around media, data, AI, and civic literacy. If twelve years of basic schooling do not truly provide a transferable foundation for self-directed, lifelong learning, then it is doubtful that short courses delivered to small cohorts will address the scale and severity of the matter. The thinking behind these kinds of technocratic solutions suggests a profound misunderstanding of the problem of disinformation. It assumes disinformation spreads due to individual deficits in reasoning that can be ‘patched’ rather than collective experiences of social alienation and economic abandonment.
What counts as political communication?
The selective application of disinformation as a category reveals more about power structures than truthfulness. Why focus on elections but not workplaces, which desperately need democratization? These boundaries expose the class character of information integrity discourse. Corporate communications that downplay environmental risks, overstate benefits, or misrepresent working conditions rarely face the scrutiny applied to political content, despite profoundly impacting public welfare and daily life. As 20th century South African political history demonstrates, labor politics and democratic politics are inseparable. Limiting disinformation to electoral contexts accepts the fiction that economic power operates separately from democratic governance, a myth that protects the existing hierarchies that shape the social life of disinformation.
Additional linguistic challenges hamper the development of the field. The term ‘AI’ has become incredibly broad and imprecise, often obscuring clarity rather than enhancing it. This vagueness is not accidental; it allows both promoters and critics to make sweeping claims without accountability to specific technical realities. I agree with Tech Policy Press contributors Emily Tucker and Joseph Keller that it is more effective to speak about specific software packages and their affordances. I would counsel that policy researchers incorporate insights from social studies of software. Understanding how particular algorithms shape information flows requires examining their actual implementation, not their marketing rhetoric, even as marketing rhetoric is designed to inflate the perceived power of hi-tech firms.
Meanwhile, the terms ‘misinformation’ and ‘disinformation’ are falling out of favor due to their associations with political bias, censorship, and the oversimplification of complex debates. The constant search for new terminology reflects the field’s collective failure to address fundamental questions about who gets to define truth and in whose interests. While the UN uses ‘information integrity’ to point to wider conditions and social relations within institutions, its agencies are political targets. As it did in 2017, the Trump administration has again decided to withdraw from UNESCO, a decision that will take effect at the end of 2026. The search for new terminology continues, with ‘synthetic narrative control’ gaining traction. But rather than endlessly rebranding the same conceptual limitations, we need frameworks that can account for how information operates within broader systems of power and meaning making.
What can we do differently?
Moving beyond security theater requires embracing ideological critique as a foundational methodology for information integrity policy research. This means shifting from “how do we stop misinformation?” to “what material and symbolic interests does information serve, and how do power relations shape what counts as legitimate knowledge?” This approach demands examining not just false information, but the entire apparatus through which some beliefs become hegemonic while others are rendered verboten.
Ideological critique offers three analytical tools absent from current information integrity policy research. First, it provides established scholarly techniques for examining how seemingly neutral technical systems encode worldviews and serve specific class interests. Platform algorithms, content moderation policies, and fact-checking systems all embed assumptions about authority, truth, and social order that more often than not favor existing power arrangements. Second, it offers frameworks for understanding how dominant groups maintain cognitive hegemony: the ability to shape not just what people think, but how they think. Third, it provides tools for analyzing how groups develop counter-hegemonic consciousness, alternative meaning-making systems, and ‘hidden transcripts.’ Adopting these techniques can inform better policy responses to disinformation.
The concept of habitus, for example, developed by French sociologist Pierre Bourdieu, may be useful for understanding why certain populations prove ‘resistant’ to official narratives. Rather than viewing this resistance as individual irrationality, one can analyze it as the product of embodied experiences with institutional failure, economic precarity, and marginalization. The habitus of groups that have experienced systematic abandonment generates reasonable skepticism toward official authority, creating what might at first appear to be susceptibility to disinformation, but on closer examination may be rational epistemic caution.
The dirty work of democracy online requires bringing ideological critique to bear on information integrity. This must be accompanied by structural transformation that addresses root causes. One must search for democracy at work. Additionally, the field of information integrity and related policy work must return to classical social questions about representation and property relations, and the politics of market structure and belief formation, especially during a moment of hegemonic transition in the world market like ours. In its current conception, the field of information integrity suffers from a lack of concept-driven social science. It is time for the field to mature.