
Prevention and Management of Lawful but Awful Content Moderation in XR Platforms

Angela Kim, Pablo González Mellafe / May 10, 2024

Telling people to drink bleach in order to combat COVID-19. Widely circulating erroneous election fraud claims. Potential use of privileged information by United States Congress members. Sexual harassment in virtual reality.

One thing these seemingly disconnected instances have in common? Each is a form of “material that cannot be prohibited by law but that profoundly violates many people’s sense of decency, morality, or justice” – or, in other words, “lawful but awful” content.

This kind of content is particularly difficult to moderate. While public concern has usually centered on these issues on traditional digital platforms (the NetChoice cases currently before the Supreme Court are proof that this is a live legal and political issue), the discussion about how to regulate lawful but awful content in virtual, augmented, and mixed reality environments – extended reality (XR) – offers an opportunity to shape the foundation of an underdeveloped, fast-evolving field of XR content moderation before the government is called to step in with ex-post legislation.

Lawful but awful content is legally permissible, but it can be significantly harmful in XR technologies, especially in virtual reality (VR). In 2022, the Center for Countering Digital Hate found that users in Facebook’s Metaverse experienced “abusive behavior every seven minutes,” including bullying, exposure to sexual content, racism, threats of violence, and harassment. The level of immersion, with its heightened sensory load and psychological proximity, leaves a more visceral impression on the user than the experience on traditional digital platforms. One victim of attacks in the metaverse “was surprised to feel her real heart racing in her chest.” Indeed, as one headline put it, “Experts say the immersive nature of virtual reality can make online attacks feel real.” The same, we argue, applies to lawful but awful situations.

Naturally, content moderation issues are not easy to address. In the US, subject to a few exceptions, Section 230 of the Communications Decency Act provides platforms broad protection from liability for content posted by third parties. This has allowed platforms to use their discretion to moderate content that is not expressly illegal, without fear of liability – a latitude governments rarely have. Therefore, we believe the initial phase of moderating lawful but awful content in XR should focus not on government regulation, but on platforms and their distinct ability to regulate technically legal but undesirable activities. Such moderation of lawful but awful content relies, at a minimum, on two complementary measures: (i) prevention through technology design; and (ii) community moderation. Where prevention falls short, moderation through community norms and guidelines offers another apt mechanism.

Prevention of Lawful but Awful Content: Moderation Through Design

The content we access in digital environments, as well as people’s behavior in those environments, can be efficiently regulated in advance through the specific architecture or design of each technology – or, as the legal scholar Lawrence Lessig puts it in his book Code 2.0, through “how it is built.” Indeed, Lessig argues that “architectures constrain through the physical burdens they impose.” Architecture, or design, encompasses both hardware and software. It is in the second aspect – software design – where we believe developers can take steps to protect users from lawful but awful content. This approach has two advantages over traditional content moderation techniques.

The first advantage is that it is preemptive. When regulating through design, the technology itself prevents lawful but awful content from occurring, in contrast to the reactive approach of traditional content moderation, where moderators take measures after the harm has occurred. The second advantage is efficiency – at least relative to traditional content moderation techniques. The big problem with lawful but awful content is that it is difficult to detect and sanction. Building prevention mechanisms into the design makes any remaining moderation interventions more targeted, because most potential situations are already addressed by the design. In other words, moderators do not need to make the difficult decision to block or report content as frequently, because the design of the platform has reduced the likelihood of a lawful but awful event.

This approach is especially useful in child online safety matters. For example, on most traditional digital platforms today, adults can interact with minors with no more supervision than, perhaps, eventual detection by content moderators. This is not illegal, of course. But what about when an adult is having a legal but inappropriate interaction with a child? Or what if the child has access to information or activities that are not appropriate for their age? While lawful, this is (or can be) still harmful. What if this happens in XR, where the adult-child interaction occurs in a much more immersive way? How can companies build prior moderation barriers into the digital architecture to prevent these situations? There are various possibilities in system design. For instance, companies can design age-gated servers separating adults and kids, potentially eliminating such interactions, as sketched below.
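As a rough illustration of what such a design barrier could look like in code, the Python sketch below assumes a hypothetical platform where each account carries a verified age; the names (User, assign_pool, can_interact) and the age threshold are our own, not any existing platform’s API.

```python
from dataclasses import dataclass

# Illustrative only: age-gated server assignment so that verified minors and
# adults are routed to separate pools and never share a social space.
ADULT_AGE = 18  # assumed threshold; a real platform might follow local law

@dataclass
class User:
    user_id: str
    verified_age: int  # assumed to come from a prior age-verification step

def assign_pool(user: User) -> str:
    """Return the age-gated server pool the user may join."""
    return "minors-only" if user.verified_age < ADULT_AGE else "adults-only"

def can_interact(a: User, b: User) -> bool:
    """Users can interact only if they land in the same age-gated pool."""
    return assign_pool(a) == assign_pool(b)

# Example: an adult and a minor are never placed together.
assert not can_interact(User("a1", 34), User("b2", 12))
```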

Other examples involve situations where people generate inappropriate content, verging on harassment, while hiding their identity behind a virtual skin. A design solution would be to require a digital identity that matches the user’s real identity (through a verified photo or biometric data, for example), so that interacting with other people requires disclosing one’s identity, not only a skin. Similarly, platforms built on blockchain technology decrease, by virtue of that design, the chances of awful situations, since information is more public and traceable. It is important to note that age and identity verification are contentious issues in today’s social media environment, and it is possible that in XR the tradeoffs cut even more strongly in favor of taking such extra steps.
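A minimal sketch of such an identity gate – again with hypothetical names and assuming a prior verification step rather than any real platform’s tooling – might look like this:

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative only: social features unlock only once a real-world identity
# has been verified and is disclosed alongside the avatar ("skin").
@dataclass
class IdentityRecord:
    user_id: str
    verified: bool                        # e.g., verified photo or biometric check passed
    disclosed_name: Optional[str] = None  # real identity shown next to the skin

def may_join_social_space(record: IdentityRecord) -> bool:
    """Allow interaction only for verified users who disclose their identity."""
    return record.verified and record.disclosed_name is not None

# Example: a verified but anonymous avatar is still kept out of social spaces.
assert not may_join_social_space(IdentityRecord("u1", verified=True))
```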

Of course, the designer’s role is not reserved for XR platform developers and operators alone: it can be complemented by governments, which can adopt regulations that promote baseline characteristics or impose technological architecture requirements that an XR platform must meet to operate in their respective jurisdictions.

Managing Content Moderation: Community-Reliant Moderation

In the many cases in which content cannot be regulated through design, developers must consciously prioritize shaping user norms – creating community standards and codes of conduct and giving them a more central, guiding role in content moderation. Reddit is a key example of this community-reliant, hybrid content moderation approach. Reddit has its own global content policies, enforced by a centralized team of employees that prohibits illegal content and “make[s] editorial decisions about where to draw the line.” Reddit also gives subreddit moderators near-total autonomy and editorial discretion. These moderators personalize content policies to create subreddit-specific standards, and they play the main role in managing the majority of Reddit’s content.

Society also has a role to play in managing this type of legally permissible content: even if it remains legal, society’s margin of tolerance is a relevant input. Like Reddit, platforms venturing into virtual reality should first adopt their own codes of conduct setting minimum standards, which volunteer moderators can then add to and personalize to create community-specific standards. Meta’s Metaverse already has a higher-level code of conduct and allows creators to make membership-based communities “where like-minded people can come together and enjoy a shared experience.” Creating these “members-only” spaces is optional rather than ubiquitous. These worlds are not moderated by Meta’s staff and are limited to 150 members per world. Creators must approve membership in these worlds, and members designated as “admins” do the bulk of content moderation within the closed communities. Creators can hide a world from others, and once they choose to, the designation cannot be altered.

To accommodate the diversity of interests and activities in the Metaverse, we need an even more multi-layered approach to shaping norms than Reddit’s. At the highest level, virtual reality platforms must espouse and enforce minimum standards or codes of conduct that reflect the norms and values the platform wants users to abide by. When a user accesses the platform for the first time, their headset should present this code of conduct, outline its guiding values, and spell out the consequences for violations. Second, as has been suggested, the biggest players in existing virtual reality platforms like the Metaverse should collaborate on a collective set of codes of conduct and guiding norms, among other measures.

Lastly, if virtual reality becomes commonplace, so too should communities centered on shared values and interests, with clear procedures. In these more private, user-driven spaces, creators of virtual worlds should be required to (i) submit a code of conduct based on shared values and aligned with the platform code; (ii) have both approved by a platform-hired moderator; and (iii) provide regular updates on violations and moderation efforts. While this self-policing approach may have its downsides, developing norms is essential to cultivating a real, healthy, and functioning society on the “next iteration of the internet”: “Because norms underlie the other modalities and shape them, they can act as an exogenous regulator to balance modalities and avoid failure.”
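To show how these three requirements could fit together, here is a minimal sketch, again using hypothetical data structures of our own invention (CodeOfConduct, WorldApplication, world_may_open) rather than any real platform’s tooling:

```python
from dataclasses import dataclass, field
from typing import List

# Illustrative only: the three-step gate described above for creator-run worlds.
@dataclass
class CodeOfConduct:
    rules: List[str]

@dataclass
class WorldApplication:
    creator_id: str
    community_code: CodeOfConduct               # (i) submitted by the creator
    moderator_signed_off: bool = False          # (ii) platform-hired moderator review
    violation_reports: List[str] = field(default_factory=list)  # (iii) running updates

PLATFORM_CODE = CodeOfConduct(rules=["no harassment", "no hate speech"])

def aligns_with_platform(community: CodeOfConduct) -> bool:
    """A community code must include every platform-level minimum rule."""
    return all(rule in community.rules for rule in PLATFORM_CODE.rules)

def world_may_open(app: WorldApplication) -> bool:
    """A world opens only if its code aligns and a moderator has signed off."""
    return aligns_with_platform(app.community_code) and app.moderator_signed_off
```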

Will this be the path chosen in the future? We hope so. Even with the open questions that typically accompany new technologies, we conclude that regulation through design and moderation by the community can reduce lawful but awful content and behavior in XR.

Authors

Angela Kim
Angela Kim is a third-year law student at the UCLA School of Law, where she served as Co-Chair of the Asian Pacific Islander Law Student Association, Comments Editor of the UCLA Law Review and Co-Chief Articles Editor of the Asian Pacific American Law Journal. She has experience working in various f...
Pablo González Mellafe
Pablo González Mellafe is a Chilean lawyer with a law degree granted by the Pontificia Universidad Católica de Chile (PUC, 2016). He has an LL.M in Regulatory Law granted by the same University (2022) and also has an LL.M. with a specialization in Media, Entertainment, Technology & Sports & Policy g...