Assessing AI and the Future of Armed Conflict

Prithvi Iyer / Sep 11, 2024

Military equipment on Ukraine's eastern front line in Donbas, 2014. Sebastian Castelier/Shutterstock

AI-generated media is increasingly being used to try to shape public opinion and mislead adversaries involved in violent conflicts, including wars. For instance:

  • In Ukraine, Russia has employed deepfakes and other AI-generated content to sow public distrust and make trustworthy information harder to identify.
  • While its provenance is unclear, a deepfake video falsely depicting a US State Department official commenting on the use of American weapons inside Russia circulated earlier this year.
  • AI-generated images of casualties in Gaza were viewed by millions and used to make false claims about who is responsible for civilian deaths and to deceive people into believing atrocities that never happened, a distraction from the many that did.
  • After Iran launched a missile and drone attack on Israel in April, fake and misleading posts using out-of-context images and video, as well as AI-generated material, went viral on social media platforms.
  • Meta, the company that operates Facebook and Instagram, said it discovered "likely AI-generated" content related to the war in Gaza and Israel’s handling of it that appeared to target news organizations and US lawmakers.
  • And in Sudan, a country currently the site of perhaps the bloodiest armed conflict on the planet, a voice cloning campaign pushing “leaked recordings” of Omar al-Bashir, the former leader of Sudan, circulated on TikTok and other platforms.

A new report by WITNESS, a global non-profit organization that helps people use technology to defend human rights, explores the relationship between synthetic media and information integrity in conflict zones. It also discusses the implications of AI-generated media for conflict resolution and peace processes.

While generative AI is still a relatively novel technology and malicious use cases are still limited, the report anticipates that “advancements in audiovisual generative AI over the next 2 to 3 years could have notable implications for global security and stability.” Specifically, the research notes that malicious actors will exploit the general public’s inability to differentiate between real and fake content, using strategies like nesting synthetic media and deepfakes “within layers of content that help to construct a persuasive but fabricated narrative.” The authors hope that their findings can provide a roadmap for governments and international organizations like the United Nations to address the challenges synthetic media poses to global security and to help leverage AI tools for conflict resolution and reconciliation.

WITNESS's research, a collaborative effort involving extensive consultations with non-governmental organizations (NGOs), humanitarian actors, and diplomatic personnel engaged in conflict resolution, provides a comprehensive overview of the different approaches to synthetic media generation and its impact on information integrity. The report then delves into the implications of synthetic audiovisual media on conflict dynamics and peace processes, offering actionable policy recommendations for mitigating the risks associated with generative AI in conflict settings.

Synthetic media and conflict dynamics

The report finds that less-sophisticated tactics like mis-contextualizing information and “shallow fakes” are still the primary means to generate false information at scale. In the realm of AI-generated media, the report notes two key trends: plausible deniability and plausible believability. Plausible deniability refers to the ability of bad actors to exploit the confusion regarding real vs. fake content to deflect responsibility for their wrongdoing. The news outlet Rest of World reported on the prevalence of this tactic during the Indian election, where politicians claimed a video was a deepfake to deflect responsibility despite evidence to the contrary. Plausible believability, on the other hand, refers to the tendency for people to believe media that aligns with their biases. As the report notes, realistic AI-generated media “provides a way for supporters of a cause to cling to their existing beliefs and perpetuate entrenched narratives.”

The report identifies several potential challenges of synthetic media in conflict zones, further complicating peace and reconciliation efforts. Generative AI’s ability to fabricate evidence may cause relevant stakeholders to arrive at erroneous conclusions, while the confusion regarding what constitutes “real content” might compromise negotiations and erode public trust. Additionally, the proliferation of AI-generated content could place significant strain on the resources of diplomats, NGOs, and other stakeholders working in conflict zones. The need for specialized technology and expertise to monitor, verify, and respond to synthetic media is growing, yet many organizations are already struggling with limited resources. The report warns that the misuse of generative AI could lead to the manipulation of peace processes, the targeting of vulnerable groups, and the undermining of confidence in conflict resolution efforts.

Challenges for global security and stability

The report predicts that in the next 2-3 years, synthetic media will pose significant security risks. Here are some of the key concerns it enumerates:

  • Increased Personalized and Automated Propaganda: Generative AI is expected to lead to a rise in personalized and automated propaganda, where tailored disinformation is spread more effectively, manipulating public opinion and contributing to regional instability.
  • Distortion of Historical Narratives: AI-generated content may be used to create false or altered representations of historical events, distorting public memory and potentially exacerbating existing conflicts by fueling division and mistrust.
  • Sophisticated Psychological Operations: The report anticipates that AI-driven psychological operations could become more targeted and effective, with the potential to exploit individual and group vulnerabilities, leading to escalations in conflict and violence.
  • Amplified Gender-Based Violence: Generative AI is already being used to create harmful synthetic media, including non-consensual sexual imagery. However, this report predicts that such tactics will be used to target women holding public office, further amplifying gender biases and increasing the potential for offline violence.
  • Heightened Disinformation Dynamics: As generative AI continues to evolve, it is likely to amplify pre-existing disinformation dynamics, leading to more severe consequences. Simulated attacks on politicians, citizens, or critical infrastructure could create widespread panic and destabilize entire regions by triggering security alerts. Unlike traditional “shallow fakes,” AI-generated synthetic media produces highly realistic and convincing content, making it significantly harder to detect and debunk, thus escalating the panic and instability it causes. Additionally, AI-driven disinformation targeting financial institutions could lead to market manipulation, causing economic instability, inflation, and food insecurity. Moreover, by spreading falsified media that reinforces stereotypes or incites hatred, synthetic content could exacerbate social divisions, undermine social cohesion, and heighten the risk of intergroup conflict.

What to do about it

The report concludes with a series of policy recommendations for governments, civil society organizations, and international NGOs working in conflict settings. These recommendations are intended to help mitigate the risks posed by synthetic media and enable effective conflict resolution in the age of AI.

  • Increase investment in research, development, and deployment of “rights-respecting, independently audited detection technologies.” Specifically, the report urges governments to develop provenance and transparency standards to trace the origin of content without violating user privacy, which is currently a tall order.
  • Establish global standards for synthetic media generation and criminalize the creation and distribution of non-consensual imagery. Diplomats should also work towards formalizing binding agreements regarding the use of generative AI in conflict settings.
  • NGOs and civil society organizations must invest in AI media literacy programs in conflict zones to ensure that vulnerable groups are aware of generative AI and the threat it poses to information integrity and public trust.

These recommendations should be implemented in consultation with local groups who are well-versed in their unique sociocultural contexts. As the report notes, this bottom-up approach “aims to ensure a coordinated, inclusive and effective response to the risks posed by synthetic media in conflict settings, protecting the rights and safety of affected communities.”

Authors

Prithvi Iyer
Prithvi Iyer is a Program Manager at Tech Policy Press. He completed a master's degree in Global Affairs from the University of Notre Dame, where he also served as Assistant Director of the Peacetech and Polarization Lab. Prior to his graduate studies, he worked as a research assistant for the Observer Research Foundation.