Policy Directions on Encrypted Messaging and Extreme Speech
Anmol Alphonso, Sérgio Barbosa, Cayley Clifford, Kiran Garimella, Elonnai Hickok, Martin Riedl, Erkan Saka, Herman Wasserman, Sahana Udupa / Aug 22, 2025

Encrypted messaging platforms play a significant role in the large-scale circulation of “extreme speech”—or what Udupa defines as “speech acts that stretch the boundaries of legitimate speech along the twin axes of truth/falsity and civility/incivility.” Messaging services are deployed to entrench hierarchies, legitimize false information and conspiracy theories, and weaponize online discourse, even as they also offer opportunities for civic mobilization, journalistic practice, and wide-ranging social interaction.
Highlighting the harms of messaging platforms, recent regulatory efforts have sought to break open encryption by creating backdoors.
- The European Commission's ProtectEU initiative aims to give law enforcement legal access to encrypted online data. The digital rights community has resisted the proposal, arguing that weakening end-to-end encryption will ultimately undermine cybersecurity. On June 24, 2025, the European Commission presented a roadmap outlining a plan to ensure law enforcement can access necessary data. Among other things, the roadmap commits to developing, by 2026, a separate Technology Roadmap on encryption that will identify and assess solutions that enable lawful access to encrypted data by law enforcement, while protecting cybersecurity and fundamental rights. (Whether such solutions could exist without breaking the promise of encryption is an open question.)
- In 2024, the UK government expanded its surveillance powers under the Investigatory Powers Act to include issuing “Technical Capability Notices,” one of which it reportedly served on Apple in 2025, demanding that the company provide law enforcement with access to encrypted data. Notably, Apple withdrew its Advanced Data Protection (ADP) feature from the UK market to protect user privacy. The UK Online Safety Act separately requires online messaging platforms to ensure that they can apply accredited technologies on encrypted channels if ordered to do so. Companies and multiple civil society organizations pushed back on both laws during their draft stages, arguing that such requirements would undermine encryption. In 2023, the UK government admitted that the “technology needed to securely scan encrypted messages sent on WhatsApp and Signal does not exist.” This week, the UK government walked back its demand to Apple, reportedly following pressure from the Trump administration.
- In Brazil, authorities arrested a senior executive of WhatsApp’s parent company for refusing to provide user data, and the messaging service was temporarily blocked several times in 2015 and 2016, illustrating how governments may use service bans and actions against platform employees to force compliance.
- Similar strategies are observed in Uganda and Zambia, where access to online platforms was blocked during elections. Countries around the world are expanding legal tools and actions to limit encryption.
Platforms have responded with legal challenges to governmental measures, while simultaneously curtailing responsible content moderation and the resources needed to implement it. On January 7, 2025, in a statement that sent shockwaves through fact-checking and civil society groups around the world, Meta announced that it would end its third-party fact-checking program in the United States, replacing it with “Community Notes,” a crowdsourced system based on user-driven consensus. It also announced its intention to simplify content policies, including removing restrictions on hate speech. While these measures do not directly affect the company’s encrypted messaging services, they set a precedent for further reductions in platform oversight. Incidents of violence linked to rumors, disinformation, and conspiracy theories on encrypted messaging have underscored the need for urgent policy action.
With WhatsApp—the world’s largest encrypted messaging platform—as the primary focus, we argue that existing debates around regulation, moderation, and policy need to address the broader political ecosystem of extreme speech and disinformation, as well as measures that account for contextual realities. We caution against indiscriminately targeting encryption, since such measures can undermine the safety and security of encryption for ordinary users. We also highlight grave problems in platform measures and content moderation practices. We offer several recommendations to make online encrypted messaging platforms safe and secure for users while keeping them rooted in international human rights principles and the protection of democratic values.
A grounded understanding of the challenges posed by WhatsApp is the first step toward context-sensitive policy and regulation.
Key Challenges
While the technical architecture and governance of online encrypted platforms influence the online space, they by no means determine how encrypted platforms are used. Long-standing structures of power, social habits, and political cultures are intertwined with technological architectures and corporate policies, resulting in what Udupa and Wasserman define as “lived encryptions.”
Contradictions
Although the technological features of WhatsApp promise privacy and secure communication, actual use on the ground is suffused with contradictions. The promised privacy of the encrypted service is swiftly overturned when authoritarian and surveillance-oriented governments set out to do so. For example, in Cameroon, as Schumann shows, state actors routinely seize phones from suspected Anglophone dissenters to inspect their data. Such explicit measures do not require sophisticated techniques for breaking encryption. In conflict situations as well as ordinary law-and-order enforcement, the safety of a WhatsApp conversation cannot be taken for granted because of extra-legal pressure tactics. Such coercion has been reported in India, where local police have been accused of using extrajudicial tactics to force people to reveal their private WhatsApp chats.
Family and trust-based networks
WhatsApp’s influence in Global South contexts has emerged from the deep inroads the platform has made into local community networks, family groups, and social relations seen as trustworthy. Saka observes that in Turkey, WhatsApp is seen as more familial than other social media platforms. Political actors have expanded campaign activities to WhatsApp groups to gain “organic” influence. Describing such circulations as “deep extreme speech,” Udupa shows that they rest on “community-based distribution networks and a distinct context mix, which both build on the charisma of local celebrities, social trust, and everyday habits of exchange.” These networks weave exclusionary messages together with good-morning greetings, religious hymns, local issues such as water supply, and other socially vetted and existentially relevant content.
Microtargeting and cross-media manipulation
Microtargeting occurs when WhatsApp messages are aimed at small groups, through what Evangelista and Bruno identify as a centralized structure “built to manage and to stimulate members of discussion groups, which [are] treated as segmented audiences.” This process creates complex flows of information that are germinated and fertilized across different WhatsApp groups and social media platforms. Garimella and colleagues have tracked how WhatsApp groups are linked to other social media platforms for political propaganda in India.
Fact-checking on WhatsApp
Due to encryption, fact-checkers often cannot find disinformation or extreme speech on the platform themselves unless users send them examples. Another challenge is that WhatsApp users tend to trust information received from friends, family, or colleagues, and are therefore not always likely to question it, verify it, or send it to fact-checkers for verification. Wasserman and Madrid-Morales show that young users hesitate to correct false information coming from elders in WhatsApp groups out of a sense of respect. A further problem is that once fact-checkers verify or debunk a claim, the correction does not always reach those who saw the original content, who may therefore remain unaware of its problematic nature.
Furthermore, not everyone will act on verified information, even if they receive it—especially if the original false information has a stronger emotional appeal. Users may also share false information under the impression that doing so helps those in their networks and communities. While several fact-checking organizations have set up tiplines and other services to receive and answer verification requests, practical constraints limit the potential impact of these efforts. “Zombie claims”—false information that resurfaces no matter how many times it has been debunked—are a major challenge.
AI-generated content
While AI technologies are being explored for fact-checking and automated dissemination of prosocial narratives, the potential impact of generative AI on social media platforms, including encrypted messaging platforms, is becoming increasingly evident. As the technology becomes more accessible and user-friendly, individuals and groups with limited resources can create high-quality content that rivals that of well-funded organizations. This democratization of content creation could lead to a more diverse range of voices and perspectives on the platform. However, it also raises concerns about the spread of disinformation and extreme speech, as malicious actors may exploit the technology for their own agendas.
Recommendations
Platform governance
Governments should not use spyware, and should ensure that surveillance practices and other platform accountability measures adhere to international human rights standards, including the principles of necessity, legality, and proportionality. Governments can commit to frameworks such as the Necessary and Proportionate Principles and the Freedom Online Coalition Principles on Government Use of Surveillance Technologies. At the same time, existing legal frameworks that provide remedies should be strengthened to ensure due process for content removal and other moderation actions.
In line with the United Nations Guiding Principles on Business and Human Rights, platforms should conduct ongoing human rights due diligence of their services across the markets they operate in to understand and address emerging risks to human rights in different contexts. Encrypted messaging platforms should participate in applying a contextually responsive industry-wide code of conduct grounded in international human rights principles.
Trust and safety and human rights teams play important roles in developing and enforcing platforms’ Terms of Service and content policies. Platforms should ensure these teams are robust, adequately funded, and supported.
Metadata analysis and user reporting
Rather than mandating content moderation that would undermine encryption, governments and platforms should explore interventions that leave encryption intact, such as metadata analysis (information on sender, receiver, and the type and size of files shared), including with the use of machine learning, and strong user reporting mechanisms to identify and address online harms.
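To make the distinction concrete, below is a minimal sketch of what metadata-only analysis could look like, assuming a platform-side log of per-message records. The field names, the threshold, and the choice of scikit-learn’s IsolationForest are illustrative assumptions, not a description of WhatsApp’s actual systems.

```python
# Minimal sketch: anomaly detection over message metadata only; no message
# content is ever read. Field names and model choice are illustrative
# assumptions, not WhatsApp's actual pipeline.
from dataclasses import dataclass

from sklearn.ensemble import IsolationForest


@dataclass
class MessageMeta:
    sender_id: str      # pseudonymous sender identifier
    fanout: int         # number of chats the message was sent to
    forward_depth: int  # how many hops the message has been forwarded
    size_bytes: int     # payload size (size/type only, not content)
    is_media: bool      # coarse file-type category


def to_features(m: MessageMeta) -> list[float]:
    """Encode one metadata record as a numeric feature vector."""
    return [float(m.fanout), float(m.forward_depth),
            float(m.size_bytes), float(m.is_media)]


def flag_outliers(batch: list[MessageMeta]) -> list[MessageMeta]:
    """Flag metadata outliers (e.g., mass-forwarding bursts) for human review."""
    X = [to_features(m) for m in batch]
    model = IsolationForest(contamination=0.01, random_state=0).fit(X)
    labels = model.predict(X)  # -1 marks an outlier
    return [m for m, label in zip(batch, labels) if label == -1]
```

The point is architectural: every signal used above is already visible to the platform without weakening end-to-end encryption of message content, so the design question becomes which signals justify review, not whether to read messages.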
Digital influence operations
While user reporting infrastructures should be improved, organized disinformation campaigns not uncommonly misuse and weaponize user reporting to overwhelm platform systems. Political weaponization of WhatsApp channels and groups, microtargeting and segmentation, coordinated manipulation, and gender-based violence are constantly evolving on encrypted messaging platforms. Riedl and colleagues show that women and queer journalists in Lebanon experience “infrastructural platform violence on WhatsApp.” Multiple stakeholders need to collaborate to address the vast networks of extreme speech and disinformation that commercial political consultants, political parties, and state actors have created on encrypted messaging platforms, including WhatsApp, through grey networks, clickbait operators, and digital influencers.
Following an assessment of the systemic risks that arise from manipulative digital influence operations, platforms should implement risk mitigation measures. Other measures include transparency in election expenditure, regulation of campaign finance, professional codes of conduct, and co-regulatory models for digital influence operations.
Research
Governments should develop legal frameworks for researcher access to data, including data donation initiatives. Platforms should provide researchers access to viral content—specifically, content that has surpassed a predefined exposure threshold, such as messages labeled as “forwarded many times.” This access could be facilitated through a public platform, empowering researchers and journalists to analyze and understand the dissemination of content on WhatsApp.
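As one illustration, a hypothetical exposure-threshold filter might look like the sketch below. The record fields and the cutoff value are assumptions made for illustration; WhatsApp’s actual “forwarded many times” logic is not public.

```python
# Hypothetical sketch of an exposure-threshold filter for researcher access.
# Record fields and the cutoff are assumptions; they do not reflect
# WhatsApp's actual (non-public) "forwarded many times" logic.
FORWARD_THRESHOLD = 5  # assumed cutoff for treating content as "viral"


def eligible_for_research(records: list[dict]) -> list[dict]:
    """Keep only records whose forward count crossed the exposure threshold."""
    return [r for r in records if r.get("forward_count", 0) >= FORWARD_THRESHOLD]


# Example: only the second record would be released for researcher access.
sample = [
    {"msg_id": "a1", "forward_count": 2},
    {"msg_id": "b2", "forward_count": 9},
]
print(eligible_for_research(sample))  # [{'msg_id': 'b2', 'forward_count': 9}]
```

A threshold of this kind limits researcher access to content that has already circulated widely, which narrows the privacy exposure of ordinary one-to-one conversations.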
Fact-checking
Online encrypted platforms and the donor community should support fact-checkers’ work through continued and strengthened collaboration. Online encrypted platforms should develop dedicated fact-checking channels or provide civil society organizations with the means and access to do so. Such channels can share fact-checks, media and information literacy materials, and credible updates during critical events such as elections.
Responsible AI use
Platforms and companies should support fact-checkers in leveraging AI to develop and share easily understandable fact-checked material, including through funding and technical expertise. Companies should invest in AI models that work in multiple languages, especially minoritized languages, and provide community moderators and fact-checkers with free access to such models. An AI-enabled reporting mechanism could be integrated into platforms to flag harmful content in multiple languages.
Conclusion
At a time when platforms are rolling back trust and safety protocols, we call for stronger platform governance and content moderation, while also cautioning that removing encryption is not a solution to extreme speech and disinformation. We recommend a contextualized approach to the governance of online encrypted messaging services, addressing different stakeholders, challenges, and opportunities. Multiple stakeholders, with the support of UN entities and other multilateral agencies, should focus on finding whole-of-society solutions to online harms and challenges. This means working with relevant expert groups, civil society, and the technical community to develop and implement technical and nontechnical solutions that are lawful, necessary, proportionate, and informed by expert opinion.
The full policy report is available here. The book, WhatsApp in the World: Disinformation, Encryption and Extreme Speech (New York University Press, 2025), can be accessed here.
The research group on encrypted messaging and extreme speech (2024-2025), which developed the policy report, was supported by the Center for Advanced Studies, LMU Munich. Sahana Udupa is the corresponding author.