How NDAs Became the AI Industry’s Tool for Surveillance and Silence
Nandita Shivakumar, Shikha Silliman Bhattacharjee / Jun 20, 2025

Nandita Shivakumar is an independent researcher and communications consultant who collaborates with Equidem. Shikha Silliman Bhattacharjee is Head of Research, Policy, and Innovation at Equidem.

Max Gruber / Better Images of AI / Clickworker Abyss / CC-BY 4.0
The artificial intelligence boom isn’t powered by algorithms alone—it runs on the invisible labor of millions of workers across the Global South. These are the data labellers and content moderators who spend their days reviewing traumatic imagery, tagging data, and performing the unseen work that makes machine learning systems function.
Yet the same tech companies that promise to build an ethical digital future rely on a familiar tool—the non-disclosure agreement (NDA)—to silence these workers through a machinery of fear and self-censorship. As Onyango, a young Kenyan data labeller subcontracted by Meta and OpenAI, told our organization, Equidem, during an extensive global investigation into labor conditions in the digital world of work across Africa, Asia, and South America:
“I cannot share anything with my family. They are the ones who are the closest to me, yet I cannot share anything because I have signed an NDA. I have kept everything bottled inside of me.”
His words speak not just to trauma, but to a system designed to make workers silence themselves.
Surveillance without watching
Surveillance often operates not through constant observation, but through its possibility. The threat of being watched becomes internalized, causing people to discipline themselves.
In the world of AI training and content moderation, NDAs serve a similar purpose. These agreements extend far beyond protecting trade secrets, barring workers from speaking about their jobs—even to therapists or family members—under the constant threat that any disclosure could be seen as a violation of the NDA, leading to termination or legal action.
Workers live in fear: what can they say, to whom, and when?
We saw this fear firsthand while conducting interviews for Equidem’s report, “Scroll. Click. Suffer: The Hidden Human Cost of Content Moderation and Data Labelling.” We reached out to hundreds of data labellers and content moderators across Kenya, Colombia, Ghana, and the Philippines. The numbers told their own story: in Colombia, 75 out of 105 workers declined to speak. In Kenya, 68 out of 110 refused. When we asked organizers and union leaders in these countries why so many workers were unwilling to talk to us, the answer was unequivocal: NDAs had created a culture of fear and enforced silence.
“So many workers come to us shaking, terrified by what they’ve signed,” said Ephantus Kanyugi, Vice-President of the Data Labelers Association of Kenya. One Colombian former data labeller who worked on a Meta contract put it even more starkly: “People won’t even say the word ‘NDA.’ That’s how scared they are.”
The architecture of impunity
NDAs don’t just silence workers—they help uphold a system of control that extracts labor while shielding tech companies and their billionaire owners from responsibility.
In the AI supply chain, most content moderators and data labellers are hired through third-party vendors or business process outsourcing (BPO) firms, often based in countries with weak labor laws and social protections. These layers of subcontracting are not accidental—they're designed to shield companies like Meta, OpenAI, and ByteDance from responsibility for the people doing their most traumatic work.
This arrangement lets platforms reap the benefits while deflecting blame when things go wrong. Take the case of Ladi Anzaki Olubunmi, a content moderator reviewing TikTok videos under contract with the outsourcing giant Teleperformance, who died after collapsing from apparent exhaustion. Her family says she had complained about extreme workload and fatigue. Yet ByteDance, the parent company behind TikTok, has faced no accountability.
Meanwhile, the work itself is brutal. Moderators must view some of the most disturbing content online—rape, murder, suicide, child abuse—often reviewing up to 1,000 videos per shift, with just seconds to process each clip. And under sweeping NDAs, many workers are too afraid to speak about what they’ve seen—even to their families and their therapists. Our research for “Scroll. Click. Suffer” has documented over 60 cases of severe psychological harm, including depression, PTSD, insomnia, and suicidal thoughts. Another 76 workers described physical symptoms—chronic fatigue, panic attacks, migraines, and more. And these are only the workers who felt safe enough to speak.
NDAs as surveillance infrastructure
This is not a bug in the system—it is the system. NDAs serve a distinct political and economic function: they allow corporations to extract maximum value from labor while minimizing accountability.
By silencing workers, NDAs prevent public scrutiny of exploitative working conditions, inhibit unionization and collective bargaining, and shield tech giants from liability even as abuses occur down their supply chains. They enable companies to claim ignorance while benefiting from the very structures that produce it.
This is what we might call fragmented accountability—where harm is spread across jurisdictions and actors so that no one entity can be held responsible, and no one worker can safely speak out.
And the implications of this go far beyond individual workers. The systems that these silenced workers help build—AI models used in content moderation, search algorithms, and recommendation engines—now shape what billions of people see, say, and do online. When the people who train and feed these systems are bound by NDAs and too afraid to speak about harmful working conditions, the public loses access to crucial knowledge about how AI actually works. In effect, the legal silencing of workers becomes a barrier to public accountability. If we can’t interrogate the conditions under which these systems are built, we can’t meaningfully govern the technologies that increasingly govern us.
Resisting NDAs, reclaiming power
What’s needed now is not reform at the margins, but a political reckoning with how some corporate legal tools have become instruments of authoritarian control in the workplace, with digital workers on the front lines.
That means restricting NDAs to their original, narrow scope—protecting proprietary data, not blanket bans on speaking about working conditions. It means establishing international protections for whistleblowers and subcontracted workers, particularly those embedded in transnational tech supply chains where corporate accountability is weakest. It means mandating transparency about the labor behind AI systems: who is training them, under what conditions, and at what human cost. And it means guaranteeing that all workers—regardless of employer or location—have the right to organize, access mental health care, and speak freely without fear of retaliation.
Some governments are beginning to act. In the United States, the Speak Out Act, passed in 2022, limits the use of NDAs in sexual harassment cases. But this is only a first step. The tech sector’s use of NDAs—particularly in the Global South—remains largely unregulated and dangerously unchecked.
Workers like Onyango are not asking for much. They want to be able to talk openly with their families. To speak honestly with therapists. To join unions without fear. To share the burden of what they’ve seen without risking their livelihoods or facing legal sanction. They want a future where the cost of building AI isn’t their silence or their trauma.
The rest of us should want that too, because the systems we create reflect the values we hold. A digital future built on secrecy, suppression, and suffering is no future at all.