How to Rethink Regulation with Prosocial Design
Lena Slachmuijlder / Nov 21, 2025
Lena Slachmuijlder is senior advisor for digital peacebuilding at Search for Common Ground, a practitioner fellow at the USC Neely Center, and co-chair of the Council on Tech and Social Cohesion.
Across much of the world, digital platform regulation is stuck or absent. In many countries, platforms face pressure from governments, civil society, and the media over harmful or illicit content, but are rarely held responsible for the design of their products. This disconnect is particularly urgent in conflict-affected regions, where unchecked harmful content can easily trigger real-world violence. There, this is not a theoretical policy debate; it is a life-or-death issue.
Most countries still lack any legal framework to govern how platforms are built, how they influence behavior, or how they shape public discourse and wellbeing. At the same time, there is real concern that giving governments more say over digital platforms can lead to censorship and otherwise threaten freedom of expression. But an emerging global consensus may offer a path forward. Regulators and experts are beginning to recognize what research confirms: online harms are not just caused by “bad users” or “bad content.” They are the predictable outcomes of platform designs optimized for attention, engagement, and data extraction.
That’s the foundational insight behind Prosocial Tech Design Regulation: A Practical Guide. Produced by the Council on Tech and Social Cohesion and shaped by 20 contributors and reviewers from Asia, Africa, Latin America, North America, and Europe, the guide offers a new blueprint for platform accountability developed in collaboration with organizations including the USC Neely Center, the Integrity Institute, TechSocietal, GoodBot, Search for Common Ground, Build Up and the Prosocial Design Network. It draws lessons from more than a dozen policies and regulatory texts—from Minnesota to Indonesia, and from the UK’s Age-Appropriate Design Code to the EU’s Digital Services Act.
From content battles to design fixes
The guide is informed, in particular, by consultations I conducted across Africa and Asia over the past year. Several workshops were held with civil society and government stakeholders in the Sahel over the last 18 months. A zero draft of the guide was shared with policymakers and digital rights stakeholders at the UNESCO Regional Conference on Information Integrity in Cabo Verde attended by 13 West African governments; at the Online Safety Forum in Lagos, co-hosted by TechSocietal; and at the UNESCO ASEAN Multi-Stakeholder Platform Governance Forum in Bangkok. These gatherings surfaced not only shared concerns, but shared aspirations for a new model of regulation that supports both safety and rights.
The guide proposes five concrete interventions that regulators can adopt now—policies that don’t require surveillance, don’t rely on speech policing, and don’t compromise freedom of expression. Instead, they govern the underlying mechanics of the platforms themselves.
1. Ban addictive and manipulative design
Features like infinite scroll, autoplay, and nudging loops should be off by default—especially for children. Prompts to pause, limit usage, or reflect can help mitigate compulsive engagement. These are simple, observable changes that can be tested and enforced.
2. Default to privacy and safety for children
Accounts for minors should start private. Messaging from strangers should be restricted. And new features should require child impact assessments before launch. This aligns with the work of organizations like the 5Rights Foundation, which advocates for a child's right to thrive in the digital world through design-based solutions. These steps are already common in certain jurisdictions; the guide aims to globalize them.
3. Reform recommender systems for long-term value
Recommendation engines should offer “better feed” modes that promote quality, diversity, and long-term value—not just engagement. Algorithms should be open to audit, documented in plain language, and adjustable by users.
4. Require transparency and testing
Major design changes should be preceded by testing, evaluation, and transparency. Public change logs should be standard practice, allowing researchers, regulators, and the public to track evolving risks.
5. Use real user experience to guide policy
Success should be measured through users’ experiences. Regulators should convene rolling, privacy-safe surveys—tracking time well spent, abuse, polarization, and well-being. These feedback loops can guide enforcement, platform improvement, and public trust. But their value goes further: when regulators collaborate with academics and civil society to collect and analyze the data, the results build a shared understanding of what health and harm look like online. It creates constituencies of support for prosocial regulation—among parents, educators, youth, and civic actors—who see clearly how harms manifest across society: from children to women and girls, and others vulnerable to discrimination or exclusion.
The equity question: why safer for some?
One focus of the guide is on child online safety, which is a concern in many regions of the world. For instance, the African Union’s Child Online Safety & Empowerment Policy calls for national action plans, tying platform duties to design-level safeguards and transparency. But in nearly every consultation I conducted in Africa and in Asia, the same question came up: Why do platforms appear to be safer for children in some countries than others?
In the United Kingdom, since the Age-Appropriate Design Code took effect in 2021, platforms have made 128 design changes. Autoplay is turned off for children, privacy is enforced, and notifications are restricted at night. But in many lower-income countries, these protections don’t exist—leaving children exposed to frictionless contact with strangers, high-speed recommendation loops, and algorithmically amplified abuse.
It would be unthinkable to sell cars with seatbelts in one country and without them in another. Yet that’s precisely what’s happening with digital products today. This global disparity is not just a safety issue—it’s a justice issue. Civil society leaders in West Africa, Southeast Asia, and Latin America have echoed this concern in regional forums: why should the wellbeing of users in their countries be worth less?
Platforms don’t have to wait
Not all the changes proposed in the guide require laws or mandates. Many are technically feasible today. Pinterest, for instance, has already implemented design adjustments—limiting engagement-based dynamics for minors—as part of its commitment under the Inspired Internet Pledge. These voluntary reforms show what’s possible when wellbeing becomes a design priority.
The challenge is scale. Without public pressure, most companies will not take these steps globally. That’s why regulation, civil society advocacy, and cross-border collaboration must be coordinated.
Governance is a shared responsibility
While this guide is aimed at regulators, it embraces a wider vision: governance must be a multi-stakeholder effort. This insight is reflected in global frameworks like UNESCO’s Internet for Trust Guidelines for the Governance of Digital Platforms, OHCHR’s human rights guidance for platform governance, and UNDP’s Information Integrity Issue Brief, which all emphasize the importance of collaborative governance. Regulators, civil society, researchers, and yes—even platforms—must shape the rules together.
Africa isn’t starting from scratch. The continent already has frameworks like the Abidjan Declaration, Rwanda’s Child Online Protection Policy, and the African Union’s Child Online Safety & Empowerment Policy. The guide proposes to complement these efforts with an African Digital Experience Observatory: a regional initiative where academics, user groups, policymakers, and companies collectively monitor harm, correlate it with design features, and surface real-time user insights.
This type of collaborative approach is already gaining traction. In discussions from Bangkok to Lagos, the consistent theme was clear: a deep concern about declining platform accountability, especially around algorithmic transparency and data access—but also, a shared desire to move forward together, with rights and safety as common ground.
The guide builds on this progress—not by copying external models, but by offering adaptable language, checklists, and policy clauses that reflect both global standards and local realities. Prosocial tech design regulation doesn’t ask regulators to police speech. It invites them to address something more fundamental: how digital platforms are built, and how those choices shape our public life.
Prosocial Tech Design Regulation: A Practical Guide is also available in French.