
Generative AI Developers Should Commit to Free Speech and Access to Information

Jordi Calvet-Bademunt / May 7, 2024


“When we started this work, we were curious. Now, we have real concerns.” The CEO of the Competition & Markets Authority (CMA), the British antitrust regulator, was referring to the competition risks her team has identified in the foundation models industry. Foundation models are a type of generative AI. Popular models include OpenAI’s GPT-4 and Google’s Gemini.

Generative AI is becoming more widespread, and there is a real risk that it will become controlled by a handful of companies, just like in other digital sectors. The Federal Trade Commission (FTC) in the United States and the European Commission in the European Union are also analyzing competition risks in generative AI. Naturally, antitrust regulators are concerned about the economic implications of this situation.

But limited competition can have adverse effects well beyond the economy. A concentrated and homogenous generative AI industry can also be pernicious for freedom of expression and access to information. It would mean that just a few AI providers could decisively influence the type of information millions of users create and access. It would be a problematic outcome if, for instance, Google’s Gemini steered Gmail users to draft messages that favored specific information and viewpoints while limiting or refusing assistance with other perspectives. Or if Microsoft’s Copilot did the same in Word, or Meta’s AI shaped what messages users wrote on its platforms.

While the future of generative AI is still unclear, and policymakers still have time to spur a competitive marketplace, we should prepare for the possibility that a few major players will dominate it. In such a consolidated marketplace, it is paramount that these dominant companies develop approaches and policies that align with human rights and, in particular, commit to freedom of expression and access to information.

Balancing Free Speech and Preventing Harm

All information-sharing companies, such as social media companies and AI providers, need to balance the right to freedom of expression and access to information with safety, equality, and other interests. Inevitably, this creates tensions. When millions of people use a company’s service and have few alternatives, these tensions become even more acute and consequential for society.

These tensions have already emerged in generative AI. Google made news worldwide because its chatbot Gemini generated images of racially diverse characters and people of color in response to prompts requesting images of Nazi soldiers and other white historical figures. Adobe Firefly’s image creation tool faced similar issues. These outputs led some commentators to complain that AI had become “woke.” Others suggested these issues resulted from faulty efforts to fight AI bias and better serve a global audience.

Such contentious debates, of course, are not new. For almost two decades, a handful of social media companies have decided what counts as acceptable speech on their platforms, shaping public discourse for billions of people. This level of influence has generated significant debate about when and how companies should moderate users’ speech. The Israel-Hamas war provides a clear illustration of this debate. After Hamas’ heinous attack on October 7, many civil society organizations warned about “a surge in online antisemitic hate” on social media and called on the platforms to better enforce their community standards, among other measures. However, other organizations, including human rights organizations, pointed out that the platforms’ responses have disproportionately restricted pro-Palestinian speech. As a report from Human Rights Watch found, “Meta’s policies and practices have been silencing voices in support of Palestine and Palestinian human rights.”

Human Rights Can Provide Guidance

Unfortunately, no solution or policy can perfectly balance freedom of expression and the moderation of harmful speech. Still, companies have a helpful and powerful tool: human rights standards. The former United Nations (UN) Special Rapporteur on freedom of opinion and expression has already proposed that Internet companies use this tool to guide content moderation. Human rights standards are reflected in constitutions worldwide and in international treaties. They can provide companies with robust guidance on balancing freedom of expression with other interests, such as fighting incitement to hatred.

Companies are not bound to follow international human rights standards, and laws may even protect platforms’ ability to moderate users’ speech and content. In addition, AI providers may wish to adopt restrictive moderation policies to protect their reputations, avoid controversial content, or shield themselves from liability. Liability is a particularly relevant consideration in a new industry like generative AI, which faces legal uncertainty. For instance, it is unclear whether Section 230, which has provided online platforms in the U.S. with immunity for publishing another person’s content, applies to generative AI.

However, these considerations should not prevent AI providers from adopting robust policies that safeguard freedom of expression, and human rights can offer a firm basis for building them. Companies such as Google and Anthropic have acknowledged the importance of human rights to their businesses, and Meta’s Oversight Board relies on human rights to inform its decisions.

Following human rights is particularly important for large digital companies, given their power over public discourse. The special responsibility held by large companies is recognized by the Digital Services Act, Europe’s online safety rulebook. This law requires that so-called “very large online platforms” assess and mitigate “systemic risks,” which include negative effects on fundamental rights such as freedom of expression and information. Regrettably, the European Commission has applied this obligation imperfectly so far. It is unclear how this law will apply to generative AI, but the European Commission has already taken its first actions.

Generative AI Providers’ Policies Are Not Aligned with Human Rights

So far, major generative AI providers’ usage and content policies are not aligned with international human rights standards regarding freedom of expression. In a recent report, The Future of Free Speech analyzed the usage policies of six major AI chatbots, including Google’s Gemini and OpenAI’s ChatGPT. Companies issue usage policies to set the rules for how people can use their models. With international human rights law as a benchmark, we found that companies’ misinformation and hate speech policies are too vague and expansive.

Our analysis found that companies’ hate speech policies contain extremely broad prohibitions. For example, Google bans the generation of “content that promotes or encourages hatred.” Though hate speech is detestable and can cause harm, policies as broadly and vaguely defined as Google’s can backfire. Similarly, the chatbot Pi bans “content that may spread misinformation.” However, human rights standards on freedom of expression generally protect misinformation unless a strong justification exists for limits, such as foreign interference in elections. Otherwise, human rights standards guarantee the “freedom to seek, receive and impart information and ideas of all kinds, regardless of frontiers […] through any […] media of […] choice.”

Major generative AI providers like OpenAI and Google should correct their course now and align their practices more closely with human rights standards. This alignment includes having public, clear, and detailed general policies. It also requires complying with the principles of legitimacy and proportionality, ensuring that content restrictions are based on solid justifications and do not go beyond what is necessary.

Policies may reasonably differ depending on the product they govern and on user preferences. For instance, companies may wish to have more restrictive policies for a tool integrated into Facebook that can automatically post comments than for a service that provides content to an individual user in a private chatbot. In addition, users have different preferences and may want more control. Companies could consider adopting a minimum content-moderation bar and enabling users to apply filters and other mechanisms to manage the content they access.

Users will inevitably question companies’ content-moderation decisions, as they have already started to do. The scrutiny will be particularly intense in a world with just a few AI providers. AI providers would be wise to have solid foundations, such as human rights, to justify their content policies.

Authors

Jordi Calvet-Bademunt
Jordi Calvet-Bademunt is a Research Fellow at the Future of Free Speech Project and a Visiting Scholar at Vanderbilt University. His research focuses on freedom of expression in the digital space. Jordi has almost a decade of experience as a policy analyst at the Organization for Economic Co-operation and Development.
