Perspective

Artificial Intelligence and the Orchestration of Palestinian Life and Death

Sarah Fathallah / Aug 12, 2025

A mural painted on the rubble of a destroyed building in Al Thawra Street in Rimal, Gaza. Photo by Hla.bashbash CC BY 4.0, Wikimedia Commons

Criticism is mounting in response to the apparent starvation campaign waged by Israel in Gaza, prompting policymakers to probe the system of humanitarian aid delivery managed by the US- and Israel-backed Gaza Humanitarian Foundation (GHF). Following Israel’s ban on the United Nations Relief and Works Agency (UNRWA), GHF was established in February 2025 to take charge of aid distribution in Gaza. A few months later, plans to mandate the use of facial recognition tools at distribution sites surfaced, leading rights groups to decry this then-hypothetical ‘biometrics-for-food’ mechanism of control and surveillance. It wasn’t until July that a GHF contractor confirmed the use of facial recognition cameras at distribution sites. This weaponization of biometrics and facial recognition marks yet another expansion of the use of artificial intelligence (AI) in the occupation of Palestine.

Artificial intelligence has been integral to Israel’s strategy as a military occupying power. For years, AI has underpinned many of its surveillance and warfare systems, including assault rifles that transmit coordinates, sensor-to-shooter systems that locate targets, armored vehicles designed to maneuver more effectively, and drones that strike targets. More recently, investigative reporting by +972 Magazine brought to mainstream attention the use of AI tools such as Gospel, which generates lists of buildings to be targeted by airstrikes, in the most recent genocidal campaign in Gaza. But the Israeli military’s reliance on AI is not new: it called its 2021 operation in Gaza “the first artificial-intelligence war,” announcing its use of Gospel along with many other AI programs.

The use of AI has been extensively situated within the surveillance that pervades Palestine and the ways Palestinians are turned into ‘data bodies.’ Much has also been written about the testing of AI systems through the lens of Palestine as a laboratory that subjects Palestinians to experimental and deadly violence. Yet these analyses do not directly address the reality that what Israel regards as the successful deployment of these technologies requires Palestinians to be both living and dead. Collectively, Palestinian bodies provide Israel with data to train and feed its AI systems, even as they are targeted and killed by those same systems. The tabulation of Palestinian deaths is then used to enhance the lethality of the systems targeting them.

Palestinians as sources of data

“After we have built the apps which manufacture the data, we’re using AI and machine learning to get the most out of all data accumulated through the apps and the sensors we have deployed.” — Israeli military senior official, CIO Magazine, February 8, 2020.

The lives of Palestinians are subject to an incessant and far-reaching apparatus of data dispossession. In the West Bank, Israel operates a vast database that profiles nearly every Palestinian. Feeding this database is an extensive network of cameras and smartphone applications that capture facial imagery of Palestinians at every checkpoint and interaction with Israeli soldiers, without their consent or enrollment. In East Jerusalem, CCTV cameras cover 95% of public areas, leading Palestinians to feel constantly surveilled, even inside their homes. In the besieged Gaza Strip, the data of Palestinians is captured through monitoring drone footage and, more recently, via a new mass facial recognition program in which Israeli soldiers erected facial recognition cameras along the corridors they urged Palestinians to evacuate through. In addition to these modes of urban surveillance, Israel relies on wiretapping, monitoring social media accounts, and infiltrating telecommunications networks to digitally extract data. Israel claims the ability to listen in on every phone call and to store recordings of those calls, at a rate of millions per hour, on cloud platforms with “near-limitless storage capacity.” Israel uses a ChatGPT-like language model to make this information usable to intelligence officials, who can ask the chatbot, for example, whether two people have ever had an encounter. Because publicly available data in spoken Arabic dialects is difficult to come by on the internet, the language model was trained “on 100 billion words of Arabic” collected from intercepted private conversations of Palestinians.

This vast and invasive surveillance apparatus demonstrates how the lives of Palestinians are coercively rendered into sources of data by and for Israel. The biometric and personal data of Palestinians is captured under circumstances of duress or concealment as they pass through public areas, military checkpoints, or evacuation routes. Israeli plans to set up gated aid distribution sites that condition Palestinians’ access to humanitarian aid on submission to biometric screening present yet another no-choice scenario, forcing Palestinians desperately seeking food to undergo facial recognition and iris scans.

AI, specifically machine learning for algorithmic data processing, makes the data captured by Israel usable. Indeed, for AI systems of surveillance, social control, and warfare to be valuable, Israel requires data to train and constantly feed them. To do so, Israel needs Palestinians to be alive: to walk past facial recognition cameras, to have phone conversations, to communicate via messaging apps.

Palestinians as targets

“We were constantly being pressured: bring us more targets. They really shouted at us. We finished [killing] our targets very quickly.” — IDF senior intelligence officer, +972 Magazine, April 3, 2024

Not only is the data captured from Palestinians fed into AI systems of social control and warfare, but that data is also used by the same systems to target them. The city of Hebron exemplifies this parallelism. Hebron contains a dense network of “motion and heat sensors, license plate scanners, facial recognition algorithms, 24/7 CCTV surveillance” that constantly profiles and monitors Palestinians. Those crossing military checkpoints in Hebron have their faces scanned and added to Israel’s surveillance databases against their will. Israel has also installed SmartShooter, a remotely controlled crowd control and dispersal system that relies on AI to identify targets via facial recognition, in Hebron. Combined with a robotic weapon, this system can shoot stun grenades, unleash tear gas, and fire sponge-tipped bullets at the targets it identifies. In a more recent use of facial recognition data to target Palestinians, the facial scanning system deployed along “safe” routes in Gaza reportedly “identified” Palestinians who were later abducted, beaten, and killed.

Similarly, data obtained through social media monitoring and electronic surveillance has historically led to the imprisonment, torture, and death of Palestinians. Israeli officials have used such data to detain hundreds of Palestinians with no criminal records or connections to militant groups, after Israeli politicians directed police to search their phones and social media accounts and to treat content they shared or liked online as evidence against them. More recently, reports surfaced revealing the IDF’s use of Lavender, an AI tool that generates kill lists of human targets, in conjunction with Where’s Daddy, another AI tool that tracks individual targets and alerts the Israeli military when they are in their family homes, so that those locations can be bombed. Both systems use digital surveillance records, such as cell phone records, WhatsApp contacts, and social media connections, to determine whom to bomb.

These examples illustrate how Palestinians are made to supply data to AI systems that pick out, incriminate, and attack them. Irrespective of the systems’ accuracy, Israel uses these data-trained and data-fed tools to target living Palestinians and transform them into potential dead bodies.

Palestinians as prototypes

“When a 3-year-old girl is killed in a home in Gaza, it’s because someone in the army decided it wasn’t a big deal for her to be killed—that it was a price worth paying in order to hit [another] target.” — Israeli intelligence source, +972 Magazine, November 2023

But data collection does not stop with the living. When Palestinians are killed, the facts and circumstances of their deaths may be used to augment the deadliness of the AI systems that will continue to kill others in the future. In Israel’s current genocidal campaign in Gaza, for example, military officers using Lavender were reportedly permitted to kill civilians in pursuit of militants. For context, Lavender assigns a rating to each individual in its system; when a Palestinian’s rating exceeds a predetermined threshold, they are added to the list of targets to be killed. After running out of targets who met the initial threshold, Israel’s military officers set a new, lower threshold to justify killing more. Palestinian deaths are thus turned into prototypical measures by which the lethal power of AI targeting systems can be calculated and increased.

The deaths of Palestinians can also be used to iterate on and improve the precision and efficiency of AI systems that will target others in the future. For instance, Israel uses an AI system called Spice 250 for automatic target recognition, fitted on the wings of bomb-carrying warplanes. According to a representative of the Israeli company that supplies this system to the military, Spice 250 enables “future deep learning to be based on data extracted from earlier launches.” The representative describes two functions the system performs automatically: real-time assessment of the damage a bomb inflicts on a target during a mission, and post-mission extraction of data “for intel updates.” In other words, with each bomb dropped, the system uses the data from that strike mission to refine the probabilistic model, and thus the precision, of future strike missions. Other Israeli companies have also begun advertising that the precision of their AI systems has improved thanks to their deployment in Gaza.

In short, Israel turns Palestinians into prototypes, relying on the information generated by their deaths to expand the reach and enhance the capabilities of the AI systems that will target and kill others.

Conclusion

For the value of Israel’s AI systems of surveillance and warfare to be realized, three conditions must be met: (1) data must be captured from Palestinians, (2) that data must be used to target Palestinians in the moment, and (3) it must be used to better target others in the future. This concurrence epitomizes the seeming paradox inherent in Israel’s need for Palestinian life, even in death. But this paradox reveals not a contradiction so much as a deliberate orchestration of Israel’s occupation of Palestine and control over Palestinians. Israel, in essence, requires Palestinian bodies to be both living and dead. For Israel, Palestinians must simultaneously be the lives from which surveillance data is captured and the tabulation of deaths through which the lethality of AI systems is demonstrated and later improved.

Evidence suggests these AI systems and technologies have contributed to large-scale destruction and the direct killing of tens of thousands of Palestinians, as well as to the indirect killing of countless others, whose deaths are precipitated by starvation and dehydration from the destruction of food and water systems, health complications from injuries and the targeting of medical and hygiene infrastructure, and other effects of displacement and lack of shelter. However, assigning full blame to AI reduces concerns about the deployment of these systems to a matter of technology ethics and obscures the role of the ideologies and institutions aiming for the ethnic cleansing and genocide of Palestinians. Ultimately, it is these ideologies and institutions that animate the rendition of Palestinian life and death in support of those aims.

Authors

Sarah Fathallah
Sarah Fathallah (they/any) is a community organizer and a critical AI researcher focused on the carceral impacts of technology. They are currently pursuing an MSt in AI Ethics and Society at the Leverhulme Center for Future of Intelligence, University of Cambridge, where they research how artificial...
