Perspective

Platform Convergence and the Limits of Technical Solutions to Counter Online Hate

Joseph Stabile / Apr 14, 2025

In June 2017, Facebook, Microsoft, YouTube, and the company then known as Twitter announced the formation of an ambitious cross-industry partnership: the Global Internet Forum to Counter Terrorism (GIFCT). From the outset, the consortium held that its value lay in its ability to shape an industry-wide approach to combating terrorist content online. Indeed, in a joint statement marking its first anniversary, GIFCT touted its ability to help “smaller companies who may have fewer resources and less experience tackling terrorist content,” particularly through information sharing, collaborative technological innovation, and research.

Today, GIFCT consists of nearly three dozen member companies, including platforms as diverse as Amazon, Discord, and Dropbox. Despite the organization’s growth, however, several of its founding members have deprioritized trust and safety, making significant staff reductions and notable policy changes. The companies that once sought to harness their resources and technological prowess to set industry standards now appear to be in retreat.

At this inflection point for content moderation, Columbia University School of International and Public Affairs (SIPA) professor Tamar Mitts’ recently published Safe Havens for Hate: The Challenge of Moderating Online Extremism offers timely insights into the trajectory of online threat actors, voluntary platform commitments, and international regulatory regimes over the past decade. Mitts’ central finding stems from her investigation of the online behavior of more than 100 hate groups and militant organizations, ranging from the Proud Boys and Oath Keepers to the Taliban and Islamic State. Taking an ecosystem-level approach, Mitts traces user migration across platforms, concluding that this activity is “driven by their strategic efforts to build digital resilience.”

Online threat actors on the move

Scholars have long examined issues related to platform migration and the evolution of the online extremist landscape, outlining how a broad array of platforms offer unique functions to extremist users. Safe Havens for Hate, however, contains particularly valuable insights in its cross-platform analysis of user responses to deplatforming. For example, Mitts finds that deplatformed Twitter users who flocked to the less-regulated Gab were significantly more likely to post hateful content and engage with content from the militia group Oath Keepers after their suspension. More broadly, she demonstrates how extremists consciously seek to maximize their audience while maintaining the ability to deliver their intended message without fear of suspension. This analysis encourages researchers and policymakers alike to consider the second-order impacts of moderation efforts, urging a perspective that moves beyond single-platform actions.

Mitts’ research makes a convincing case that spillover of extremist content frequently occurs in the wake of a single platform’s decision to ban a particular group or movement. However, an interesting case that runs counter to this trend is that of the messaging application TamTam, which is notably not a member of GIFCT. In late 2022, TamTam removed more than a dozen channels associated with the white supremacist Terrorgram Collective after users began to migrate away from Telegram to avoid content takedowns. Outliers like this case warrant further study to understand how close monitoring and public pressure can help mitigate content spillover, particularly on less-regulated platforms.

The merits and limits of cross-platform collaboration

Mitts’ analysis goes beyond threat actor trends, adding significant insight into the impact of regulatory developments over the past decade. Perhaps most relevant to contemporary policy debates is Mitts’ discussion of what she calls “platform convergence”: a shift toward similar platform policies and content moderation thresholds. Though policymakers tend to urge cross-industry collaboration to confront extremism, Mitts finds that “centralizing moderation across platforms also has costs,” warning that this convergence can “give too much power over online speech to a small number of actors.”

It is perhaps worth noting that Mitts’ primary focus on text-based social media content may undersell one positive aspect of cross-industry collaboration. In an era of live-streamed terrorism, GIFCT’s Content Incident Protocol and Incident Response Framework meaningfully help to mitigate the spread of attacker manifestos and live streams that can traumatize platform users and inspire further violence. Though it cannot completely eliminate this form of terrorist content in real time, this proactive approach to coordination among industry, government, and other relevant stakeholders stands out as a marked improvement over the status quo of the late 2010s.

That said, Mitts’ caution reflects longstanding concern from legal scholars such as Danielle Citron and Evelyn Douek, whose respective work on “censorship creep” and “content cartels” critiques platforms’ failure to uphold transparency and accountability in their removal decisions. These arguments remain particularly salient amid two simultaneous developments: the shifting scope of actors subject to counterterrorism tools and the blurring line between private technology platforms and government authorities.

In February, the United States designated several cartels and transnational criminal organizations as both Foreign Terrorist Organizations and Specially Designated Global Terrorists. Facing public pressure from the US, Canada followed suit with its own designation of many of the same actors. The quickly evolving focus of US counterterrorism resources, however, is unlikely to stop with cartels. Indeed, recently appointed senior US officials have signaled their intent to deprioritize efforts to counter white supremacist violent extremism while threatening to wield counterterrorism tools against civil rights activists and so-called “Antifa” supporters.

Meanwhile, Elon Musk, the chairman and chief technology officer of X, one of GIFCT’s founding member companies, serves as a “special government employee” and close advisor to the President of the United States. Musk has evolved from a “private governor” of online speech into a figure wielding an immense degree of both public and private power. In this sense, the case of X challenges Mitts’ conception of technology companies as driven primarily “to generate profit”; platform control clearly advances political objectives as well. As a government official and platform leader, Musk now possesses two potential avenues to influence industry content moderation norms, which remain heavily reliant on government designation lists and standards set by the largest platforms.

The future of industry collaboration

Considering the degree to which large platforms like X have historically shaped the agenda and moderation approach of a much broader set of actors, the current arrangement raises difficult questions about the future of cross-platform cooperation. Is it possible to disentangle the views of X leadership from those of a senior US government official? How would GIFCT respond to pressure to redirect resources toward cartels, transnational criminal organizations, and other actors? Have institutions like GIFCT relied too heavily on the voluntary goodwill of large platforms to drive the approach of the industry writ large?

Although smaller platforms are not compelled to align their moderation policies with those of X (or any other company), decisions made by large platforms will have downstream consequences for the consortium. If large platforms share fewer “hashes” of terrorist content with GIFCT because of diminished trust and safety capacity, the hash-sharing database, a key component of the organization’s value proposition, could degrade over time. Though only YouTube reports the number of its hash contributions, X’s reported departure from the GIFCT board late last year signals a decreased investment in the partnership.
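To make the mechanics of hash sharing concrete, the minimal sketch below illustrates the general idea in Python: one platform contributes a content hash to a shared set, and another platform checks uploads against it. The function names and the exact-match SHA-256 hashing are illustrative assumptions only; GIFCT members rely on perceptual hashing techniques that can match visually similar, not just byte-identical, files. The sketch’s point is simply that the database is only as useful as what members contribute: a hash that is never shared can never produce a match on another platform.

```python
import hashlib

# Hypothetical shared hash set standing in for a cross-platform database.
shared_hash_database: set[str] = set()

def contribute_hash(content: bytes) -> str:
    """One platform flags a piece of content and shares its hash."""
    digest = hashlib.sha256(content).hexdigest()
    shared_hash_database.add(digest)
    return digest

def check_upload(content: bytes) -> bool:
    """Another platform checks an upload against the shared hashes."""
    return hashlib.sha256(content).hexdigest() in shared_hash_database

# Platform A contributes a hash; platform B's identical upload matches it.
contribute_hash(b"example flagged file bytes")
print(check_upload(b"example flagged file bytes"))   # True
print(check_upload(b"slightly altered file bytes"))  # False: exact hashing misses edited copies
```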

GIFCT, therefore, faces the twin challenges of navigating evolving governmental definitions of terrorism while also managing the loss of a once-crucial contributor to the consortium’s theory of success.

None of this is to suggest that GIFCT is futile as an organization; coordination serves a valuable purpose even in a fractured tech environment. It does, however, point to the inherent limits of technical solutions, particularly when they are premised on the voluntary cooperation of large platforms driven by both financial and political interests. Moreover, it reinforces calls for transparency surrounding the use and makeup of shared datasets, as well as continued public debate about what qualifies as terrorist content. As Mitts makes clear through her research, one thing is certain: as the industry evolves, threat actors will continue to build digital resilience and move strategically across the online ecosystem.

Authors

Joseph Stabile
Joseph Stabile is a researcher of political violence and white supremacist extremism. He previously worked as a policy strategist at MITRE, a federally funded R&D center, where he conducted research and provided strategic planning support to the US government. Before joining MITRE, he was a research...
