
Imagining the Possibilities for an Online Civil Rights Act

Gabby Miller / Mar 4, 2024

First convening of the US Senate's "AI Insight Forum," September 13, 2023.

Last November, after one of his ‘AI Insight Forum’ convenings, Senate Majority Leader Chuck Schumer (D-NY) told reporters that AI legislation was “months, not days, not years” away, with AI and elections among the highest priorities. While Congress has put forth several election-related AI bills, it remains unclear what a broader legislative framework might look like. And with US elections just eight months away in November, the window for passing AI regulation, and for it to have any real impact before voters cast their ballots, is narrowing.

The Lawyers’ Committee for Civil Rights Under Law is attempting to fill this void with its model artificial intelligence bill, the Online Civil Rights Act. Released in December, the model bill’s main intent is to “address the discriminatory outcomes, bias, and harm arising from algorithmic systems, which form the basis of artificial intelligence products and large language models.” It also emphasizes the need for a tech-neutral regulatory and governance regime that mitigates and prevents both current and future harms caused by AI.

Well-documented harms include discriminatory algorithmic systems used in facial recognition, tenant screening, credit scoring, and more, all of which disproportionately harm Black communities and other people of color. These known harms are part of the reason that the Lawyers’ Committee is lobbying Congress to adopt legislation that both promotes the responsible development and use of AI, and prevents unsafe AI tools from being made available.

The Act advances both aims primarily through its “duty of care” provision, which would require that algorithmic systems be evaluated before release, that reasonable steps be taken to prevent harm, and that their uses be neither unfair nor deceptive. To meet these obligations, developers and deployers would be required to establish governance programs with certain contractual obligations, particularly around data collection and processing procedures. And to ensure robust research and accountability, companies would be required to publish long-form disclosures of their evaluations and impact assessments. The hope is not only to protect consumers, but to build trust in the systems they use. Additional trust might be built through the Act’s requirement that all commercial content created or modified using generative AI be clearly labeled as such.

Data security is also top of mind: the bill would restrict the collection of personal data and require that robust security be maintained over it. Companies would be prohibited from collecting, processing, and transferring personal data beyond what is “reasonably necessary and proportionate,” according to the Act’s fact sheet. Developers also could not use personal data to train their algorithms without affirmative express consent, and individuals would have the right to access, correct, and delete any data used in this training or other deployments. This aims to counter the existing “notice and consent” framework, under which companies’ dense privacy policies give them virtually free rein to use consumers’ personal data however they choose. These practices have historically led to “security risks, discriminatory practices, predatory advertising, and fraud based on personal information.”

Failure to follow the Act’s provisions could result in the Federal Trade Commission (FTC) bringing enforcement actions. Individual consumers would also have a “private right of action” to bring civil suits against violators, and state authorities could join these suits or file their own. The Act additionally clarifies that a person offering products like AI chatbots or image generators would not receive Section 230 immunity for AI-generated content, something Senators Josh Hawley (R-MO) and Richard Blumenthal (D-CT) have pushed for with their proposed No Section 230 Immunity for AI Act.

Damon Hewitt, President and Executive Director of the Lawyers’ Committee for Civil Rights Under Law, participated in the ‘AI Insight Forum’ session focused on democracy and elections last fall. The closed-door series, made up of nine sessions and advertised as a non-traditional alternative to committee hearings, was meant to meet the current “moment of revolution” around AI, according to Sen. Schumer. Hewitt’s session was divided into two parts: the first half focused on the risks AI poses to civil rights and civic participation, and the second on watermarking AI-generated content, both pressing concerns ahead of the US elections this fall.

Hewitt told Tech Policy Press after the forum that his goal was to convey to Senators and other stakeholders in the room that there is an urgent need for US federal legislation to address AI-specific threats to election integrity. He cited an example from the 2020 elections, when right-wing activists used “primitive” technology “on the cheap” to make 85,000 robocalls, largely to Black Americans, to discourage them from voting by mail. AI would act as a “force multiplier” for these kinds of existing voter suppression efforts and illegal schemes, Hewitt said at the time.

With no comprehensive federal regulation of AI or algorithmic systems and no federal privacy law, the elections look ripe for new strains of AI-powered ploys by bad actors seeking to undermine election integrity. Already, the campaign trail has seen AI-generated robocalls imitating President Joe Biden's voice to discourage Democrats from voting in New Hampshire's primary. The Online Civil Rights Act represents an attempt to translate the ideas the Lawyers’ Committee for Civil Rights Under Law brought to the US Capitol last fall into action. And without meaningful intervention, the Committee argues, the tools of the future will be used to lock users into the mistakes of the past.
