The Supreme Court Needs to Consider Where Tech is Going in the Gonzalez and Taamneh Cases

Justin Hendrix, Ben Lennett / Feb 28, 2023

Ben Lennett is a tech policy researcher and writer focused on understanding the impact of social media and digital platforms on democracy. Justin Hendrix is CEO and Editor of Tech Policy Press.

Shutterstock AI Image Generator with prompt, "Supreme Court of the Future"

Last week, the Supreme Court heard oral arguments in two related cases, Gonzalez v. Google LLC and Twitter, Inc. v. Taamneh. Both cases were brought by the families of American citizens killed in ISIS terrorist attacks, who are suing social media platforms for damages over the platforms' alleged role in supporting the terrorist group. Still, the cases ask two different questions of the Court. In Gonzalez, the Court must determine whether Section 230 liability protections apply when platforms "make targeted recommendations of information [or content]" provided by third parties. Taamneh asks whether a platform like Twitter, or any other entity that provides a widely available service, can be held civilly liable for aiding and abetting international terrorism under U.S. terrorism laws.

The two cases are intrinsically linked, because to hold a platform liable, plaintiffs must not only satisfy the statutory requirements of the Anti-Terrorism Act (ATA) or the Justice Against Sponsors of Terrorism Act (JASTA); courts must also determine whether such claims are barred by Section 230. For observers of the Section 230 debate, the particular issues in the Gonzalez case are more familiar. Section 230 bars claims of harm that depend on the content a user posts to YouTube, and platforms cannot be held liable for their good-faith efforts to remove objectionable content. And despite concerns that the Court would upend Section 230, the Justices did not openly question these general parameters during oral arguments. Just as important, they appeared wary of whether they could draw a line excluding content recommendations from Section 230 protection without undermining the entire internet or social media.

The Taamneh case is much harder to follow because it involves a complex debate about the meaning of the text of the ATA and JASTA and which liability claims are viable under those laws. For example, under Section 2333 of Title 18 of the U.S. Code, an entity that "aids and abets, by knowingly providing substantial assistance" to an act of terrorism can be held civilly liable for injuries to a U.S. citizen. The discussion in Taamneh requires the Court to weigh the different elements of what it means to "knowingly provide substantial assistance" and what kinds of actions by a social media service or other entity meet those elements. If the Court draws such distinctions too narrowly, it could become harder to bring cases not just against tech platforms but also against charities, banks, and other entities that provide support to terrorist groups. If it draws them too broadly, these laws could implicate any number of businesses that provide general services to the public.

The Underlying Tech Isn’t A Static Variable

Many observers of the Gonzalez case remain concerned that the Court could draw a line in a manner that would jeopardize social media and compel platforms to overly police speech and remove content to avoid liability. But in the context of future tech, the converse is also problematic. Suppose the Court finds in favor of Google and draws too large an immunity boundary around content recommendations, including algorithmic recommendations. Such a decision could foreclose any mechanism to hold these platforms accountable for harms resulting from their actions and conduct in developing the algorithms themselves.

Similarly, suppose the Court agrees with Twitter in the Taamneh case, but in doing so rewards companies for willfully ignoring what is happening on their platforms in order to get around JASTA or similar laws. Such a decision could incentivize tech companies to design platforms that rely exclusively on automation and AI, with no human input or monitoring.

Can the Court avoid this? We're not sure; it will depend on the language of its decisions. What matters in both examples is that the Court does not create a circumstance in which cases are always dismissed before the plaintiffs can argue them in court. Setting aside the merits of the Gonzalez complaint, for example, as Deputy Solicitor General Malcolm Stewart pointed out at the close of his argument before the Court:

…the situation we're concerned with is what if a platform is able through its algorithms to identify users who are likely to be especially receptive to ISIS's message, and what if it systematically attempts to radicalize them by sending more and more and more and more extreme ISIS videos, is that the sort of behavior that implicates either the text or the purposes of Section 230(c)(1), and we would say that it doesn't.

It is unclear whether Google's algorithms did this. But if the Court holds that content recommendation algorithms are categorically immune under Section 230, can the courts or a jury even look at evidence that makes such a case? Moreover, if a platform willfully ignores how its services are used by terrorists or those affiliated with terrorists, can it automatically avoid liability under JASTA or any other law that relies on an entity 'knowingly' contributing to the harm in question?

Automated Content Transformations and AI Will Raise New Questions

These questions are further complicated as tech firms employ increasingly sophisticated, automated mechanisms to package, manipulate, optimize, and target content on their platforms. For example, last year Meta CEO Mark Zuckerberg indicated that the company would double the share of users' Facebook and Instagram feeds populated by AI-generated recommendations. What happens if the company's AI models or machine learning systems violate civil rights or product liability laws in how they target content to users or groups of users? Does Section 230 immunize that type of platform conduct? What if the company has no idea it is happening, or doesn't understand why, because the AI is making the decisions?

Social media companies are also increasingly using AI to enhance and transform user content. For instance, according to one researcher, Google automatically rewrites headline tags on its search engine results pages as much as a third of the time. Research has found that headlines can guide a reader's interpretation and understanding of content, and a recent paper in the Journal of Online Trust and Safety notes that "[s]earch engines can impact user perception about the credibility of the news not only through the selection of stories (and sources) on the results page, but also through the rankings in which these stories appear." Should a social media platform or a search engine that substantially rewrites headlines, snippets, or other content components in a way that serves the interests of a violent terrorist group seeking engagement for its content be shielded from liability?

What of other transformations? In the oral argument in Gonzalez v. Google, there was substantial discussion of the role of thumbnails, the images drawn from video frames that are used to promote videos on YouTube. Eric Schnapper, the counsel for the plaintiff, argued that when Google goes "beyond delivering to you what you've asked for, to start sending things you haven't asked for, our contention is they're no longer acting as an interactive computer service," and thus should not have protections under Section 230.

While it did not appear that this argument got much traction, how might the Court’s view on that question change if, for instance, Google ran every thumbnail through a neural network to upscale the image to a higher resolution, potentially making the video more attractive to click on? Or what if Google offered YouTube creators other automated mechanisms to improve or manipulate content, such as upgrading or optimizing audio? At what point does the platform become a co-creator or co-producer of the material?
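
To make that hypothetical concrete, here is a minimal sketch, written by us purely for illustration, of what such a platform-side transformation could look like. It assumes the Pillow imaging library and uses simple Lanczos resampling as a stand-in for the learned super-resolution model a real platform would more likely deploy; nothing in it reflects how YouTube actually processes thumbnails.

```python
# Illustrative sketch only: a platform-side step that enlarges a creator's
# thumbnail before surfacing it in recommendations. Lanczos resampling stands
# in for a neural super-resolution model here. Requires the Pillow library.
from PIL import Image


def upscale_thumbnail(source_path: str, output_path: str, factor: int = 2) -> None:
    """Enlarge a thumbnail by `factor`, with no input from the uploader."""
    with Image.open(source_path) as thumbnail:
        width, height = thumbnail.size
        enlarged = thumbnail.resize(
            (width * factor, height * factor),
            resample=Image.LANCZOS,  # stand-in for a learned upscaler
        )
        enlarged.save(output_path)


# The platform, not the creator, decides whether and when to apply this step:
# upscale_thumbnail("creator_upload.jpg", "recommendation_feed_version.jpg")
```

The point of the sketch is not the image processing itself; it is that the enhancement is initiated and performed entirely by the platform, which is what makes the co-creation question hard to answer in the abstract.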

Jonathan Zittrain, a professor of law and computer science at Harvard, told us that the Court could potentially “close off avenues that need to be considered” if it is not careful in how it shapes its decision or if it issues dicta that endorse a broader immunity for algorithms and other mechanisms involved in recommendation systems. While it’s not necessarily the case that a platform like YouTube should be held liable for automated enhancements or enticements to content posted to its services, it may be that the implications of such transformations need to be considered carefully on a case-by-case basis.

Justice Elena Kagan raised the important question of the Court's institutional competence on technical matters during the Gonzalez oral argument. "I mean, we're a court," said Justice Kagan. "We really don't know about these things. You know, these are not like the nine greatest experts on the Internet." Harvard's Zittrain says, "the Court could reasonably either raise or fold. It could raise by making a broad statement on Section 230 immunity and counting on a spate of follow-on percolation in the lower courts to further flesh it out, rather than narrowly tailoring its decision to the Gonzalez case. Or it could fold by taking heed of Justice Kagan's institutional competence concerns and declining to establish any new guidelines for how courts should interpret the law."

The latter could be the most prudent choice, particularly given how difficult it is to universally exclude algorithmic recommendations and generative AI technologies from Section 230 protection or, conversely, to assume they are always immune. James Grimmelmann, a professor at Cornell Law School and Cornell Tech, told The Markup that "[i]t wouldn't be a bad thing if the courts had to work this out, one case at a time, over the course of several years. Generative AI is really complicated, and we barely understand how it works and how it will be used. Taking time to work out the legal rules gives us more of a chance of getting them right."

Given such concerns, the Court should choose its words carefully when it issues its decisions in these two cases. The Justices need to be sure their language accounts not just for yesterday's technology, but also for tomorrow's.
