
Deepfakes and Elections: The Risk to Women’s Political Participation

Vandinika Shukla / Feb 29, 2024

Vandinika Shukla is a fellow at Tech Policy Press.


The recent circulation of AI deepfake pornographic images of Taylor Swift on social media appeared to bring momentum to the fight for legal consequences for nonconsensual intimate imagery and other forms of synthetic media. But in time, this incident is more likely to serve as another emblem of the use of digital and social media to drive women out of political life. While Taylor Swift is unlikely to be deterred from public life by such an incident, the chilling effect of AI-generated images and videos used to harass women in politics is a growing phenomenon.

Black feminists raised early alarms: years before realistic deepfakes of Volodymyr Zelensky spread misinformation about the war in Ukraine, and before robocalls impersonated Joe Biden, deepfakes of women and minorities were already potent vectors of tech-facilitated gender-based violence. In a blockbuster election year punctuated by new generative AI threats, online violence against women will have a silencing effect on the political ambitions and engagement of women and girls, decreasing their presence and voice in politics and public life.

But tech-facilitated gender-based attacks are a solvable problem. We need stronger legislation, better access to data, and eyes on the ground in the form of election observers to set a new precedent for the online safety of women and girls.

The landscape of online violence against women in politics

Online violence against women includes aggression, coercion, and intimidation that seeks to exclude women from politics simply because they are women. It targets individual women to harm them or drive them out of public life, but also sends a message that women don’t belong in politics – as voters, candidates, office holders, or election officials.

This year, much of the world will go to the polls in an era of hyper-convincing synthetic media: images and audio in realistic vernacular, rendered across multiple languages, generated and disseminated automatically and at scale. Targeted attempts to deter women's political participation will become more harmful not only in the 2024 election year but also for future generations, as young women calculate that entering the public sphere is too costly.

In a recent conversation with the Center for Humane Technology, legal scholar Dr. Mary Anne Franks spoke about how the rise in deepfake porn has shifted the landscape for women online: “All they really need is a few photos or videos of your face, things that they can get from innocuous places like social media sites. The next thing you know, a person can produce an image or a video of someone that makes it look like an intimate depiction, when in fact it never took place.” According to one often-cited statistic, ninety-six percent of deepfakes online depict women in non-consensual pornography. The sheer volume of elections, combined with the accessibility and gamification of producing deepfakes – as we witnessed on 4chan message boards, where a ‘daily challenge’ asked users to create adult AI images with the best proprietary engine rather than the more common open-source models – means the market for such material is likely to boom.

Lucy Purdon, founder and director of the nonprofit advocacy organization Courage Everywhere, has worked on the Kenyan elections since 2013 and is an expert on gender justice and technology. Reflecting on the implications of this perfect storm for female politicians, she noted that “Online harassment will have a higher cost for female politicians because that harassment manifests in not just attacks on political competency but a cultural rejection of women. Women candidates are already too underfunded to challenge sexualised and gendered disinformation and will always risk stronger retaliation.”

The retreat of women online can have big implications for governance and democracy. A global survey of 14,000 girls in twenty-two countries found that 98% use social media, and half reported being attacked for their opinions before they were old enough to vote. Consequently, almost 20% of respondents stopped posting their opinions. The stakes rise drastically in political contexts where a woman’s identity is already under constant threat, and online slurs or violence can quickly turn into direct physical threats. The National Democratic Institute measured the impact of online attacks on women’s participation in political discourse in Kenya, Colombia, and Indonesia by tracking the Twitter engagement behavior of politically active women before and after they experienced online attacks. NDI found “strong evidence” that online abuse “decreased women’s willingness to continue engaging in social media.”

Meanwhile, looking at the full ecosystem of women’s political participation, it is also noteworthy that women make up 80% of election workers in the United States, where they face unique, gendered harassment. Similarly, the Brennan Center recently reported that women state and local officeholders were three to four times as likely as men to experience abuse targeting their gender. Former Georgia election workers Wandrea “Shaye” Moss, Helen Butler, and Ruby Freeman, who were targeted with false accusations of voter fraud, are just a few of the many Black female election officials and workers who reported facing harassment, threats, and criminal charges that forced some from their jobs.

While focused research on online harms against women in politics remains hard to find, a 2021 study published by the Wilson Center put a spotlight on the vitriol directed at women in public life. The report studied online conversations about female politicians – including then-New Zealand Prime Minister Jacinda Ardern, then-UK Home Secretary Priti Patel, then-US Senator and now Vice President Kamala Harris (D), Rep. Alexandria Ocasio-Cortez (D-NY), Rep. Elise Stefanik (R-NY), and Senator Susan Collins (R-ME) – across six social media platforms. It found over 336,000 individual pieces of gendered and sexualized abuse, posted by over 190,000 users, directed at the report’s 13 research subjects. It also highlighted that malign creativity – the use of coded language; iterative, context-based visual and textual memes; and other tactics to avoid detection on social media platforms – is the greatest obstacle to detecting and enforcing against online gendered abuse and disinformation. “We have been raising the alarm for a while – deepfake porn, online gendered abuse and disinformation are national security issues and we continue to lose access to data to get a full picture of abuse against women in public life. We studied 336,000 pieces of abuse, but we know the issue is bigger than that,” shared Nina Jankowicz, the report’s lead author, a disinformation expert who wrote the book How to Be a Woman Online and herself became the target of a widely covered, coordinated gendered disinformation campaign.

Furthermore, the report finds that abuse often occurs outside of highly visible areas. On Twitter, for example, rather than being sent exclusively as a reply to a target’s tweet, a quote tweet, or a screenshot of the tweet, abuse can be sent in reply to other content that may or may not tag the target. “Users yell at their target as much as about them. So, unless we have access to the specific research subject’s account, we don’t know the breadth of the challenge,” Jankowicz emphasized.

This data gap is a key challenge in combating tech-facilitated gender-based violence. Without data, advocates remain ill-equipped to anticipate or track the new ways that generative AI will reshape tech-facilitated gender-based violence. Dr. Rumman Chowdhury’s latest research for UNESCO anticipates that synthetic histories and compositional deepfakes will be among the many new methods of harming individuals. Imagine a coordinated disinformation campaign intended to fabricate a reputationally harmful story: a compositional deepfake would generate realistic audio, video, text (such as ‘fake’ news articles), and images that all reinforce the story.

Looking to the field

One potential solution is to leverage election observers to catalogue women’s political participation. Election observers give visibility to pressure points and document them for others to act on. “Election observers produce highly contextual reports as they are on the ground connecting the dots, ultimately feeding recommendations back to the state on how to strengthen democratic processes and highlighting their obligations under international human rights law,” says Purdon. Local election observers represent an underused resource that could be trained to track the impacts of gendered disinformation, though in many jurisdictions observers face funding shortages and significant political risks. Election observers have helped build public trust in the election process and are now preparing to monitor targeted disinformation, especially against female candidates. Election observation reports, such as those from the National Democratic Institute and the International Republican Institute, catalogue important trends in gender inclusion and the media ecosystem, building understanding of the impact of AI on women’s political participation and helping contextualize interventions.

Legal levers of change

Legal restrictions on non-consensual pornographic deepfakes, and on AI-generated pornographic media in general, have been few and far between. But just as Taylor Swift fans created literal seismic activity at her concerts, they have also helped mobilize bipartisan action in the US Senate. The Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024, or the “DEFIANCE Act of 2024,” would add a civil right of action for intimate “digital forgeries,” creating a consequence for depicting an identifiable person without their consent. With a lowered standard of proof, the provision would let victims recover financial damages from anyone who “knowingly produced or possessed” the image with the intent to spread it.

The real question, however, is whether this proposed federal legislation – even if it were to advance in a divided Congress – will produce results, especially for women without the resources or social capital of high-profile public figures. “Our legislature is not fit for purpose in this era because it doesn’t understand or consider online crimes as real crimes. We need a criminal statute at the federal level that will disincentivize this behaviour but also draw funding for greater training and education for law enforcement,” said Jankowicz, who shared her own frustration that costly and lengthy civil suits were the only legal recourse available to her as a victim of deepfake porn.

Stronger legal restrictions can have a few positive externalities. They can compel social media platforms to prioritize and take seriously enforcement against this behavior, create stronger consequences and costs for those who create or disseminate deepfake pornography, and steer necessary law enforcement resources toward supporting victims. The Online Safety Act in the United Kingdom, South Korea’s Act on Special Cases Concerning the Punishment, Etc. of Sex Crimes, California’s Assembly Bill 602, and Virginia House Bill No. 2678 offer strong models for legal recourse. A comparative analysis of these approaches reveals a potential gold standard in South Korea’s legislation, which prohibits the creation and distribution of non-consensual pornographic deepfakes, does not require malicious intent on the part of the perpetrator, and makes violations punishable with imprisonment.

Following the high-profile incident involving Taylor Swift, and with dozens of elections on the horizon, there is an opportunity to leverage growing political will to make the internet safer for women and girls. Stronger regulation and coordinated action across the electoral apparatus will be essential to disincentivize harmful gender discrimination online. We can no longer afford to watch deepfakes and tech-facilitated gender-based violence widen the digital gender divide and alienate future generations of women from political life.
