Gonzalez v. Google: A Perspective from the Cato Institute

Ben Lennett / Feb 20, 2023

Ben Lennett is a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy.

Ahead of oral arguments in Gonzalez v. Google, LLC at the Supreme Court next week, I sent a short questionnaire to gather the perspectives and legal opinions of different organizations that filed briefs with the Court. It asked organizations for their perspective on the Gonzalez case and on the arguments by the Petitioner and the U.S. government urging the Court to narrow Section 230 protections. I also asked for opinions on the Zeran v. AOL decision that largely shaped the U.S. courts’ interpretation of Section 230’s immunity protections.

Below are responses provided by Will Duffield, policy analyst at the Cato Institute. Read Cato’s full amicus brief here.

Why does this case matter to the organization you represent?

Cato’s mission is to advance individual liberty, limited government, free markets, and peace. Online, individual liberty means being able to find the speech and speakers each of us wants to hear. Algorithms, especially personalized recommendation algorithms, help users sort through the billions of speakers online to find whatever is personally relevant. Injecting other concerns into personalized discovery algorithms via litigation is bound to hamper their ability to serve users. Liability would also place new costs and constraints on platform developers, empowering monied incumbents at the expense of entrepreneurs, creators, and users. Imposing liability on algorithmic recommendation would both limit individual liberty online and fossilize the market for algorithmic recommendation tools, so we hope the Court will see the value in previous rulings on the issue and preserve what has worked well for the past twenty-six years.

What is your position generally on the merits of the Gonzalez case? Is Google liable if its algorithms recommend terrorist videos to users? Is it liable if it monetizes those same videos with ads?

A liberal society shouldn’t expect private companies to police what is and is not a “terrorist video” under pain of a lawsuit. In a pluralistic society, people will have different definitions of extremism and might view extremist content for a variety of reasons. In Gonzalez, there is no evidence that any of the Bataclan attackers were radicalized by YouTube videos, nor is Google alleged to have specifically favored ISIS content, so I don’t see how Google could be considered culpable at all. Most platforms, YouTube included, demonetize violent or extremist content because advertisers don’t want their products shown alongside it, and they ban terrorist groups outright. This is the opposite of willful support.

More generally, as long as any revenue sharing ceases once the platform learns who the recipient is, I don’t think liability is practicable or desirable. Imposing liability for de minimis ad revenue sharing would make it much harder for users to make a living by speaking, regardless of their topic or audience. The lost incentives to knowledge production outweigh whatever minimal impact such liability might have on terrorist financing.

Does Section 230 immunize Google and other social media companies from liability more generally when they recommend third-party content to users?

Section 230 immunizes intermediaries when they recommend third-party content because algorithmic recommendation is inextricable from organization and publishing. There is so much potential content online that websites must always pick one thing over another to feature on a user’s screen.

Unless a platform does something to substantially alter user-uploaded content such that it could be seen as the platform’s speech, recommended content is still, per Section 230, “information provided by another information content provider.” A platform's decision to organize speech on the basis of gleaned user preferences does not make the recommended speech that of the platform.

Courts have recognized this in cases such as Force v. Facebook, where the Second Circuit held that Section 230 protected Facebook friend suggestions. The court reasoned that because Facebook merely matched the contents of user profiles “based on objective factors applicable to any content, whether it concerns soccer, Picasso, or plumbers,” Section 230 protected its recommendations.

A platform must directly and “materially” contribute to the unlawfulness of user speech to become liable for it. This standard was most clearly satisfied in Roommates.com, where the roommate-matching website required users to submit discriminatory racial preferences.

Liability for organizing or arranging speech can suppress it just as assuredly as liability for its content. Section 230 was intended to free intermediaries from liability for hosting speech. Attempts to read an exception for algorithmic arrangement into the statute not only ignore its letter but clearly violate its spirit.

Do you agree with the Zeran v. AOL decision that strongly shaped how courts interpreted Section 230?

Zeran v. AOL concerned false, anonymous messages posted to one of AOL’s bulletin boards that paired offensive content with Kenneth Zeran’s phone number. The Fourth Circuit was correct to recognize that Section 230 protected AOL. It appreciated that although the anonymous poster had used AOL’s service to harass Zeran, responsibility for the posts rested solely with their author because “Congress made a policy choice, however, not to deter harmful online speech through the separate route of imposing tort liability on companies that serve as intermediaries for other parties' potentially injurious messages.”

Expecting AOL to review the contents of every message posted to its bulletin boards would prevent it from offering bulletin boards at all, harming lawful speakers in order to preclude opportunities for unlawful behavior. This is exactly the outcome Section 230 was intended to avoid. Indeed, with the filtering technology of the 1990s, AOL’s moderation burden would have been impossible to meet. Modern algorithms do a much better job of screening unlawful or defamatory speech, but they are still far from perfect. That makes it all the more important to refrain from imposing legal liability for their inevitable mistakes.

If the court relies on the arguments in your brief to make its decision, how will it impact social media and the internet more broadly?

If the Court relies on our arguments, it will reaffirm a textual understanding of Section 230 that has safeguarded a remarkable and diverse ecosystem of platforms, apps, and websites for more than twenty years. Ideally, the Court will increase, rather than reduce, the certainty offered by Section 230’s clear rules of the road, and the internet can continue to develop in a free and unencumbered fashion. Protecting algorithmic discovery and search tools will ensure that the internet remains “a forum for a true diversity of political discourse, unique opportunities for cultural development, and myriad avenues for intellectual activity,” as envisaged by Section 230’s drafters.
