
Moderating Cross-Platform and Cross-Instance Abuse on Decentralized Networks

Kaustubha Kalidindi / May 28, 2025


Interest in decentralized platforms has grown markedly in response to shifts in the ownership of centralized social media platforms and in their CEOs’ positions on key functions such as moderation, and it is fueled by public criticism of the concentration of power on those platforms. Whether centralized or decentralized, platforms are required to moderate, especially when they host a substantial global user base. The problem of moderating illegal and harmful content persists even as users continue to shift to decentralized platforms such as Mastodon, Bluesky, Threads, and Pleroma, and their growing global user base raises new questions about both the technical and social aspects of moderation.

Decentralized social media platforms communicate through protocols such as ActivityPub (used across the Fediverse, including Mastodon and Pleroma) and the AT Protocol (used by Bluesky), a process called federation. Moderation on these platforms takes place at the instance level, and a user is bound by the guidelines of the instance through which they join a decentralized platform. Instances/servers are local units of moderation with their own rules and governance mechanisms. They are run by moderators/server admins, who make moderation decisions at an account level (requiring information from a user before permitting them to join the instance) and at a policy level (through rules governing users’ activities on the instance). As Erin Kissane and Darius Kazemi note in their report on Fediverse governance, “vibes matter a lot” when determining who can join an instance. Moderation on the Fediverse can be tricky: there are fewer resources, moderators/server admins are often overburdened, and norms and standards among instances frequently conflict.
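To make the reporting flow concrete, here is a rough sketch (a hypothetical illustration, not drawn from the report or any specific codebase) of what a federated abuse report looks like on ActivityPub platforms such as Mastodon, which represent reports as “Flag” activities; every URL and field value below is invented for illustration.

```python
# Illustrative only: the approximate shape of a federated abuse report.
# Mastodon and similar ActivityPub servers forward reports to the remote
# instance as "Flag" activities from the ActivityStreams vocabulary.
# All identifiers below are hypothetical placeholders.
flag_activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Flag",
    "actor": "https://instance-a.example/actor",             # reporting instance
    "object": [
        "https://instance-b.example/users/alice",             # reported account
        "https://instance-b.example/users/alice/statuses/1",  # reported post
    ],
    "content": "Coordinated harassment; see the linked posts.",
}
# The receiving instance decides for itself, under its own rules, whether and
# how to act on this report -- there is no network-wide enforcement.
```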

In this piece, we look at a specific form of chain-of-posts abuse, i.e., abusive posts that spread across platforms or across instances, to make the case that moderation on decentralized platforms must improve as more users explore and shift to them.

The known challenge of cross-platform/cross-instance abuse

Cross-platform harassment and brigading have been longstanding issues, as women journalists, who are disproportionately targeted, have often highlighted. Drawing from the broader discourse on cross-platform brigading tactics adopted by bad actors, we must consider an aspect of cross-platform abuse in which abusive content is posted on multiple platforms in a coordinated manner. Users subjected to such abuse must report the abusive content on every platform where it is proliferating. On each platform, they must understand the platform-specific policy, keep track of the reporting mechanisms, and follow through on the process for content removal or account-level action against perpetrators. This burden is further amplified on decentralized platforms, which have instance-level policies and reporting mechanisms. Users’ ability to switch to different instances without losing their history, a key benefit of decentralized platforms, poses a significant challenge for moderation. Cross-instance abuse, where content proliferates across instances within a decentralized platform in a coordinated manner, runs into the difficulty of reconciling conflicting instance policies.

For example, centralized platforms have policies against the recidivism of content that the platform has acted on and removed. On decentralized platforms, whether such a policy exists depends on the instance a user is on. Individuals end up reporting content on each instance, and may have to report it multiple times on a single instance if it has no policy against recidivism. Even if one post is taken down, the issue persists as long as copies of the content exist elsewhere, whether on other instances or other platforms. Users on decentralized platforms therefore face the additional fatigue of navigating a complex network of policies and actions across instances. Thus far, attempts have been made to enable industry collaboration on content that is unambiguously harmful, such as CSAM and terrorist content, through initiatives like the Lantern program and GIFCT. While industry collaboration has improved somewhat, there have been valid concerns about platform convergence.
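To illustrate what a policy against recidivist content involves at a technical level, the minimal Python sketch below checks new uploads against digests of content an instance has already removed. This mirrors the spirit of hash-sharing initiatives such as GIFCT’s database, but the function names and the use of exact SHA-256 hashes are assumptions made for illustration; real deployments typically rely on perceptual hashes so that near-duplicates are also caught, and a shared list only helps the instances that choose to consume it.

```python
# Minimal sketch, not a production design: detect exact re-uploads of content
# this instance has already removed. Function names and hashing choices are
# illustrative assumptions.
import hashlib

removed_hashes: set[str] = set()  # digests of content previously removed here

def record_removal(content: bytes) -> None:
    """Remember a digest of content that moderators have taken down."""
    removed_hashes.add(hashlib.sha256(content).hexdigest())

def is_recidivist(content: bytes) -> bool:
    """True if byte-identical content was previously removed on this instance."""
    return hashlib.sha256(content).hexdigest() in removed_hashes
```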

Where cross-platform or cross-instance abuse occurs on decentralized platforms, improving content reporting processes to support survivors and considering options for signal-sharing become pertinent. Current moderation support available to moderators/administrators of instances has included actions such as banning individual users, or deleting or restricting the visibility of individual content and accounts. However, these tools are considered rudimentary. While defederation is an option available on decentralized platforms, it is a ‘radical’ one. At present, recourse processes on decentralized platforms are still a work in progress. With users moving towards decentralized platforms, increasing support for moderation and reporting is crucial to giving users a better browsing experience.
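For a sense of how coarse these levers are in practice, the hedged sketch below uses endpoints documented in Mastodon’s admin API to suspend a single account and to block an entire remote domain (the blunt instrument behind defederation). The instance URL, token, and IDs are placeholders, and other Fediverse software exposes different admin interfaces, so treat this as an illustration rather than a recipe.

```python
# Hedged sketch: the kinds of account- and domain-level actions available to
# a Mastodon admin. Endpoint paths follow Mastodon's documented admin API;
# the base URL, token, account ID, and domain below are placeholders.
import requests

BASE = "https://instance-a.example"                # hypothetical instance
HEADERS = {"Authorization": "Bearer ADMIN_TOKEN"}  # token with admin scopes

# Account-level action: suspend one reported account.
requests.post(
    f"{BASE}/api/v1/admin/accounts/12345/action",
    headers=HEADERS,
    data={"type": "suspend", "text": "Coordinated harassment campaign"},
)

# Domain-level action: stop federating with an abusive remote instance.
requests.post(
    f"{BASE}/api/v1/admin/domain_blocks",
    headers=HEADERS,
    data={
        "domain": "abusive.example",
        "severity": "suspend",
        "public_comment": "Repeated coordinated abuse",
    },
)
```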

What is needed to improve Fediverse moderation

Current reporting mechanisms against coordinated abusive content are burdensome for users (as we found in the process of creating Uli), and on decentralized platforms, server admins are confronted with large-scale cases of abuse. These mechanisms can be improved by implementing tools that enable better moderation on instances (some of which have been explored here and here). Further suggestions have been made on ways to secure federated platforms, such as information-sharing processes between moderators/server administrators. Implementing transparency, both in tooling and in process, is key. Exploring open source tooling and understanding governance processes in existing open community projects is a step towards such implementation. It would also be beneficial to have guidance documents on instance policies to support moderators/server administrators in instituting and implementing them, ensuring better moderation practices without taking away the agency of federation or sliding towards more centralized moderation policy frameworks.
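One concrete form such information sharing already takes is admins exchanging domain blocklists. The sketch below merges a shared CSV list into a local one, with local decisions always taking precedence; the column names and helper functions are assumptions, loosely echoing the domain-block exports some Fediverse admin tools offer rather than any fixed standard.

```python
# Hedged sketch: merge a shared domain blocklist into a local one without
# overriding the local admin's own decisions. CSV columns (domain, severity,
# comment) and function names are illustrative assumptions.
import csv

def load_blocklist(path: str) -> dict[str, dict[str, str]]:
    """Read a blocklist CSV into a mapping keyed by domain."""
    with open(path, newline="") as f:
        return {row["domain"]: row for row in csv.DictReader(f)}

def merge_blocklists(local: dict, shared: dict) -> dict:
    """Adopt shared entries only for domains the local admin has not ruled on."""
    merged = dict(shared)
    merged.update(local)  # local decisions win, preserving instance autonomy
    return merged
```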

Further, given the present nature of decentralization, we may not be able to remove all copies of content across all instances (especially where an instance does not consider the content a violation of its policies), but we do need a framework to help moderators reconcile conflicting instance policies when faced with cases of cross-instance abuse. On a broader level, standardized reporting methods across instances and networks, along with publicly available documentation on reporting mechanisms, are necessary to reduce the burden on users subject to such abuse.

Authors

Kaustubha Kalidindi
Kaustubha Kalidindi is a lawyer currently working as Legal Counsel at Tattle Civic Tech, where she is also the Program Manager of Uli, a project focused on building community-led responses to tackle online abuse. Her expertise encompasses platform governance, AI safety, and open source innovation, a...
