Perspective

Tools for Reporting Online Violence Are Broken. Here’s How to Fix Them

Sherry Hakimi / Aug 28, 2025

For several years now, we’ve seen leading social media platforms make dramatic cuts to their trust and safety teams, often in the name of ‘efficiency.’ Unfortunately, the results have been predictable: Harassment and online threats on those same platforms are more widespread than ever, and the platforms’ systems and resources for reporting these threats have become increasingly ineffective.

The combination of the platforms' broken reporting processes and deep cuts to trust and safety is a major problem, but it also presents a real opportunity: Developing a universal reporting system would create efficiencies that the platforms' users, operators, and shareholders alike would benefit from.

There’s no shortage of flaws in platforms’ current systems for reporting harassment, but the most fundamental issue for users is that these systems are invariably onerous and retraumatizing, and typically lead to no resolution, which dissuades many people from submitting reports at all. This shouldn’t be read as flat criticism, but rather as a set of key opportunities for improvement.

People facing threats and harassment often face them simultaneously across multiple social media platforms, yet each platform has its own siloed reporting system, disconnected from the others. To make matters worse, different platforms use different terminology for the same types of threats: Meta refers to image-based abuse as “non-consensual intimate imagery” (NCII), while Google calls it “non-consensual explicit imagery” (NCEI). That is just one example of how needlessly complicated the process is for both users and platforms.

Users also have no reliable way of knowing which threats to take seriously, which can lead to deadly consequences. People often try to ignore or minimize harassment, and in many cases, online threats never manifest in real life, but a growing number of them do. According to UNESCO, 20 percent of surveyed female journalists globally reported that online threats had led to physical violence against them. In 2022, SWGfL surveyed roughly 150 people with lived experience of online abuse; the data showed that only one in three victims reports online abuse to social media moderators, while one in four will never report to anybody.

A heartbreaking example of online threats turning into real-world violence happened just last year to Hanifa Shirzad, a former official at Afghanistan’s Ministry of Defense, who escaped the Taliban and fled to Iran, only to begin receiving death threats via WhatsApp. She ignored them, and was then fatally stabbed in broad daylight while exiting a taxi on a busy Tehran street. As her husband told The Times: “I wish I had taken it more seriously.”

With more than 5 billion social media users worldwide, it has become clear that platforms can’t quickly understand the often-dangerous situations people are facing, let alone provide them with the support and resources they need. The platforms need a reporting system that centers on user safety and streamlines what is currently a fragmented, retraumatizing process. The potential is enormous: A universal system would reduce harm, standardize responses, save time and money, create systemic efficiencies, and generate better safety data.

Toward a universal reporting system

After personally experiencing intense, coordinated online harassment that led to real-world threats, I cofounded Pirth.org, which serves as a global trusted flagger where people can report threats they’re facing online, escalate their cases with various platforms, and quickly get personalized, trauma-informed support. After our first full year of operation, our top takeaway is simple: Reporting online threats and harassment, and getting help, is hard, but it doesn’t have to be.

Working with users facing threats on every continent has only reinforced that need: A universal reporting system that prioritizes user safety and simplifies today’s fragmented, onerous, and often retraumatizing process would dramatically reduce online harassment and real-world threats, while helping platforms save money and deliver a better, safer experience for all of their users.

Instead of individual platforms constantly reinventing the wheel with different terminology and varying levels of success, their efforts, data, and expertise could be pooled to create a universal reporting tool that actually works. The blueprint is fairly simple: Users across platforms should be able to follow a single, straightforward process for reporting the specific threats and harms they face, with clear paths toward resolution and without needless hurdles.

At Pirth.org, we always start with the question, “What does the person facing this threat or harm need at this moment?” With that in mind, we designed a trauma-informed, fairly straightforward reporting form that allows the user to report any type of threat or harm on any social media platform. The form asks them to describe the nature of the threat(s) they’re facing, the perceived motivation behind them, and the context of their situation, and to attach documentation, including links and/or screenshots of posts, comments, direct messages, and so on. At the end of the form, we also ask users what sorts of support would be most helpful to them. When they hit “submit,” the Pirth.org platform immediately generates a Personal Action Plan that is customized not only to their profile (age, gender, location, profession) but also to the nature and perceived motivation of the threat(s). Building something with similar ease and capabilities into the platforms themselves would make the process simpler and faster for users, especially with regard to documentation and authentication of reports.
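To make the idea more concrete, here is a minimal sketch of what a shared, platform-agnostic report schema and action-plan generator might look like. Every field name, category, and step below is a hypothetical illustration, not Pirth.org’s actual data model or any platform’s API.

```python
# Hypothetical sketch of a universal, platform-agnostic abuse report.
# Field names, categories, and steps are illustrative only.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class ThreatType(Enum):
    HARASSMENT = "harassment"
    DEATH_THREAT = "death_threat"
    IMAGE_BASED_ABUSE = "image_based_abuse"  # one label, whatever a platform calls NCII/NCEI
    DOXXING = "doxxing"
    IMPERSONATION = "impersonation"


@dataclass
class Evidence:
    platform: str                  # e.g., "whatsapp", "instagram"
    url: Optional[str] = None      # link to the offending post, comment, or profile
    screenshot_path: Optional[str] = None


@dataclass
class UniversalReport:
    threat_types: List[ThreatType]
    perceived_motivation: str      # free text, e.g., "gender-based", "political"
    situation_context: str         # the reporter's own description of their circumstances
    evidence: List[Evidence] = field(default_factory=list)
    support_requested: List[str] = field(default_factory=list)  # e.g., ["legal", "digital security"]
    reporter_profile: dict = field(default_factory=dict)        # age, gender, location, profession


def build_action_plan(report: UniversalReport) -> List[str]:
    """Turn a submitted report into an ordered list of next steps (illustrative logic only)."""
    steps = ["Preserve all evidence: export chats and keep original screenshots."]
    if ThreatType.DEATH_THREAT in report.threat_types:
        steps.append("Treat the threat as potentially credible; contact local emergency services or a trusted hotline.")
    if ThreatType.IMAGE_BASED_ABUSE in report.threat_types:
        steps.append("Consider submitting image hashes to a takedown service such as StopNCII.org.")
    for item in report.evidence:
        steps.append(f"Escalate to {item.platform} through its abuse-reporting channel, citing the saved evidence.")
    return steps
```

The point of the sketch is not the specific fields but the shape: one schema, one intake flow, and one place where documentation, context, and requested support travel together, regardless of which platform the abuse occurred on.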

The operational efficiencies and cost-savings potential are enormous. A universal reporting system would serve as a natural companion for platforms’ operations and policy teams by improving their ability to triage user reports for human or automated review, while eliminating the need for each platform’s trust and safety and engineering teams to conduct their own user research and testing to develop a trauma-informed reporting process. As each platform figures out how to leverage AI to improve operations and content moderation, a combined effort to develop and harness AI for moderation, reporting, and back-end review would benefit everyone involved. Pooling data and model development can help reduce some biases, and a universal system means there is only one portal for de-biasing efforts to focus on, instead of dozens (or hundreds). There is also a clear cost-saving benefit, especially for smaller platforms that don’t have the resources to develop a better reporting process independently.

A universal reporting system would benefit not only platforms and users, but civil society organizations and researchers as well. Instead of lobbying individual platforms (Facebook, Instagram, Telegram, TikTok, WhatsApp, YouTube, and so on) one at a time, civil society could work through a universal reporting system with independent oversight, enabling earlier detection of repeat perpetrators, networks, patterns, and trends, and potentially helping to avoid situations like the Rohingya crisis.

In a decentralized social media ecosystem, it’s easy for bad actors to exploit the disconnect between platforms to do more harm. With a universal reporting system feeding threats and harms into one transparent place, bad actors could be identified more quickly, and mitigations could be implemented across all social media platforms.

Ultimately, a universal reporting system would also allow for independent, rigorous, and transparent analysis of a cross-platform database of online threats and harassment, making every platform and every user safer.

Learning from what already works

As many large platforms have cut their trust and safety teams and resources, we’ve seen some gravitation toward the development of open-source trust and safety tools. The best example of this may be ROOST (Robust Open Online Safety Tools), an independent organization with partners across the industry that offers a range of mix-and-match safety tools designed to protect online users and communities. After we discussed the idea of a universal reporting system, ROOST’s VP of Product, Juliet Shen, told me that their goal is “to provide open-source, accessible building blocks designed to safeguard online users and communities. A reporting component could be one such brick.”

A universal reporting process and system may sound ambitious, but implementing a sensitive, information-sharing system across numerous, distinct entities isn’t a revolutionary concept. Just look at the Common App, which is used by over 1,000 colleges and universities across the US, Canada, China, Japan, and many European countries. Run by an independent, nonprofit consortium, the app allows students to apply to any of its participating schools through a single form, bypassing many of the typical financial and administrative hurdles that make applying to college more difficult than it should be. And unlike the Common App, which spans more than a thousand institutions, a universal reporting system would require buy-in from a much smaller initial number of platforms.

Still, it’s not hard to imagine resistance to a universal reporting system, which would require a rare level of coordination, cooperation, and co-development that major platforms aren’t accustomed to. There are fundamental differences in content structure, development approaches, and terms of service across platforms, making alignment challenging but achievable. Implementing such a system would involve technical hurdles and, crucially, require platforms to cede some control over the user experience. Nevertheless, leading platforms already collaborate with each other, government agencies, and civil society to tackle issues like terrorism-related content, child sexual abuse material, and other criminal activity. Concessions have been made to address these harms, and threats and harassment deserve the same level of seriousness.

There are encouraging signs of progress and precedents for cooperation between civil society, government, and social media platforms. In May, US President Donald Trump signed the bipartisan Take It Down Act into law, criminalizing the non-consensual publication of intimate images, including AI-generated deepfakes, and compelling platforms to remove them. There is industry-wide support for child safety initiatives like the National Center for Missing and Exploited Children’s Take It Down tool, which allows people to submit a report to help get a sexually explicit image of themselves removed from the Internet. Platforms have also cooperated with StopNCII.org, which helps take down AI-generated deepfakes and non-consensual intimate images. In 2022, SWGfL, StopNCII.org’s parent organization, “wanted to join the dots in a victim-centred way,” and so it conceptualized an initiative called the Minerva Project, which “came from a desire to give victims a platform which helps them to do this,” said SWGfL’s Revenge Porn Helpline Manager Sophie Mortimer. Although the Minerva Project has not yet launched, it sets a valuable precedent and lays the groundwork for a universal reporting system.

Stopping the epidemic of online threats and harassment is not just a moral imperative; it’s a practical one, too. Tech companies, civil society organizations, and other key stakeholders have a real opportunity to build shared infrastructure that protects users across platforms and makes the entire Internet a safer place.

This can be fixed, but only if we stop treating reporting as a user burden and instead recognize it as a shared responsibility. It’s time for platforms and stakeholders to stop working in silos and start building something better, together.

Authors

Sherry Hakimi
Sherry Hakimi is the CEO of Pirth.org, a global nonprofit organization on a mission to provide people with the resources and support they need to stay safe, while advancing online safety solutions. Sherry’s career has spanned the private, public, and nonprofit sectors on five continents, including a...
