Perspective

Building Trust in Synthetic Media Through Responsible AI Governance

Asheef Iqubbal / Jun 23, 2025

Asheef Iqubbal is a technology policy researcher at CUTS International, a research, advocacy, and capacity-building organization headquartered in Jaipur, India.

Trust in public information ecosystems is crucial for democratic debate and socio-economic growth. While digital media has expanded access to information, it has also enabled the spread of mis- and disinformation, a problem now compounded by generative AI. Although synthetic media produced with generative AI, including deepfakes, can serve constructive purposes in areas such as education and entertainment, its misuse, such as creating non-consensual intimate content or spreading misinformation, raises significant concerns. Unlike traditional misinformation, synthetic media often appears convincingly real and is harder to identify, with studies showing that many people perceive AI-generated false media as genuine. The World Economic Forum warns that AI-driven falsehoods, which can erode democracy and deepen social polarization, are an immediate risk to the global economy. India is particularly vulnerable to these harms due to low digital literacy and the waning legitimacy of legacy media.

Regulation intended to address synthetic media harms is evolving globally. The European Union’s AI Act places deepfakes in its “limited risk” category, requiring transparency disclosures. The United States has proposed legislation targeting specific issues, such as the DEFIANCE Act for non-consensual explicit deepfakes and the No AI FRAUD Act to protect personal likenesses. The Take It Down Act, which President Donald Trump signed into law last month, aims to ensure the removal of non-consensual intimate synthetic media. The UK’s Online Safety Act criminalizes the creation of intimate deepfakes and imposes obligations on social media platforms. India’s Ministry of Electronics and Information Technology (MeitY) has also issued an advisory requiring deepfakes to be labeled. Together, these jurisdictions demonstrate a spectrum of regulatory design choices, from labeling requirements to obligations on social media platforms to remove malicious synthetic media, choices that need to be critically examined.

Trust, privacy, accountability

Emerging regulations aimed at addressing synthetic media harms are largely reactive, focusing on measures such as removal from social media platforms and identification of synthetic content. While this is a step in the right direction, these measures alone do not address the creation of malicious synthetic media and its associated harms. Even when media is clearly labeled as artificial, it can still cause real damage. Consider a woman depicted in non-consensual, AI-generated pornographic content: she may still experience shame, sexual objectification, and distress even if the media includes a disclaimer stating it is synthetic. Labeling alone is also insufficient because harmful content can be viewed or shared thousands of times before it is detected or removed.

Relying solely on labeling tools faces multiple operational challenges. First, labeling tools often lack accuracy. This creates a paradox: inaccurate labels may legitimize harmful media, while unlabeled content may appear trustworthy. Moreover, users may not view basic AI edits, such as color correction, as manipulation, while opinions differ on changes like facial adjustments or filters. It remains unclear whether simple color changes require a label, or whether labeling should apply only when media is substantively altered or generated using AI. Similarly, many synthetic media artifacts, such as images showing white substances on a person’s face, may not fit the standard definition of pornography yet can still be deeply humiliating. These inconsistencies may lead platforms to apply labels unevenly, creating gaps and making it difficult to identify malicious synthetic media across different platforms.
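A rough back-of-the-envelope sketch helps show why imperfect detection can simultaneously legitimize harmful media and cast doubt on authentic media. The figures below are purely hypothetical assumptions chosen for illustration; they are not statistics from this article or from any study.

```python
# Illustrative only: how a seemingly accurate detector mislabels content
# when synthetic media is a small share of what platforms host.
# All numbers below are hypothetical assumptions, not figures from the article.

def label_outcomes(total_posts, synthetic_share, true_positive_rate, false_positive_rate):
    """Return counts of missed synthetic posts, mislabeled authentic posts,
    and the share of applied 'AI-generated' labels that are correct."""
    synthetic = total_posts * synthetic_share
    authentic = total_posts - synthetic
    flagged_correctly = synthetic * true_positive_rate    # synthetic posts that get a label
    missed = synthetic - flagged_correctly                # synthetic posts left unlabeled
    flagged_wrongly = authentic * false_positive_rate     # authentic posts mislabeled as AI
    precision = flagged_correctly / (flagged_correctly + flagged_wrongly)
    return missed, flagged_wrongly, precision

# Assume 1,000,000 posts, 1% synthetic, and a detector that catches 90% of
# synthetic posts but also mislabels 5% of authentic ones.
missed, flagged_wrongly, precision = label_outcomes(1_000_000, 0.01, 0.90, 0.05)
print(f"Synthetic posts that slip through unlabeled: {missed:,.0f}")
print(f"Authentic posts wrongly labeled as AI:       {flagged_wrongly:,.0f}")
print(f"Share of 'AI' labels that are correct:       {precision:.0%}")
# Under these assumptions, only about 15% of applied labels are correct,
# while 1,000 harmful synthetic posts carry no label at all -- the paradox
# described in the paragraph above.
```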

Second, synthetic media use cases exist on a spectrum, and the presence of mixed AI- and human-generated content adds complexity and uncertainty to moderation strategies. For example, when moderating human-generated media, social media platforms only need to identify and remove harmful material. With synthetic media, it is often necessary to first determine whether the content is AI-generated and then assess its potential harm. This added complexity may lead platforms to adopt overly cautious approaches to avoid liability. These challenges can undermine the effectiveness of labeling and, in turn, create a situation where genuine content is dismissed as fake, eroding trust in shared reality. The case of BJP MP Dinesh Lal Yadav “Nirahua” highlights this risk: after a video surfaced showing him blaming overpopulation for unemployment, he dismissed it as AI-manipulated.

Finally, provenance tools such as watermarking, metadata recording, or hashing can compromise users’ privacy and anonymity by requiring access to personal data. Online anonymity and privacy are important protections and should not be equated with malicious intent. For instance, they offer safety to survivors of domestic violence or sexual abuse and to LGBTQ+ individuals in hostile environments, allowing them to seek support securely. Broad tracking measures risk treating all users as potential offenders, shifting the burden of proof onto individuals. Further, assigning unique, identifiable identities does not always ensure accountability, just as anonymity does not imply non-compliance with the law.
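As a purely illustrative sketch of the design choice at stake, consider the difference between provenance metadata that merely discloses synthetic origin and metadata that also identifies the person or device behind it. The field names, tool name, and structure below are invented for this example and do not correspond to any real provenance standard or library.

```python
# Hypothetical sketch of provenance metadata attached to a synthetic media file.
# Field names are invented for illustration; they are not any standard's schema.
import hashlib
import json

def build_manifest(media_bytes: bytes, tool_name: str,
                   creator_id: str | None = None,
                   device_id: str | None = None) -> dict:
    """Record how a media file was generated.

    The content hash and generating tool are enough to signal 'this is
    synthetic'. Creator and device identifiers are what turn a transparency
    record into a tracking record -- the privacy trade-off discussed above.
    """
    manifest = {
        "content_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "generated_by": tool_name,
        "is_synthetic": True,
    }
    if creator_id is not None:
        manifest["creator_id"] = creator_id   # personally identifying
    if device_id is not None:
        manifest["device_id"] = device_id     # personally identifying
    return manifest

# A disclosure-only manifest: proves synthetic origin without identifying the user.
print(json.dumps(build_manifest(b"fake-image-bytes", "example-genai-tool"), indent=2))
```

Whether identity fields are mandatory, optional, or absent is a policy decision, not a technical inevitability, which is why broad tracking requirements deserve the scrutiny argued for above.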

Reimagining liability

Synthetic media governance must be context-specific, as the legality and appropriateness of AI-generated content often depend on how and where it is used. A precedent from one case cannot be applied indiscriminately across all situations. For example, a teacher using generative AI to create synthetic media depicting hate speech for a history lesson might not be acting unlawfully, given the context and intent. However, if someone uses generative AI to disseminate speech that incites violence, the implications are far more serious. A context-sensitive regulatory framework would calibrate obligations to the potential impact. This requires developing evidence-based risk classifications and harm principles through collaborative processes.

Based on collaboratively developed risk classifications and harm principles, codes and standards should be created and made mandatory across the AI system lifecycle. For example, AI systems should incorporate safety codes and security standards as non-negotiable baseline requirements, regardless of their apparent risk profile. Developers should then implement progressively stronger protective measures as potential harm increases across dimensions such as scope, severity, and probability. This risk-calibrated approach ensures proportional safeguards: high-risk applications require robust guardrails, including transparency mechanisms and model testing, while lower-risk innovations are not burdened by over-compliance.

Liability should be triggered by non-compliance with these foundational and subsequent safeguards. This is important because as much as one-third of generative AI tools enable intimate media creation, with technology advancing to produce 60-second synthetic videos from a single image in under half an hour. Compliance and monitoring should be overseen by an independent oversight body comprising government officials, civil society representatives, academics, and subject matter experts. This body would publish annual reports open to public scrutiny. Such oversight would enhance transparency, build user trust, and provide a precise understanding of generative AI’s capabilities.

Beyond oversight, the framework must also acknowledge power imbalances between platforms and users, particularly in the aftermath of harm. To mitigate this, civil society organizations should be legally permitted to support and represent affected individuals in claiming compensation or reparation. To ensure adequate funding for such compensation, policymakers should promote market-based mechanisms such as insurance or liability pools. These solutions create dedicated financial reserves for AI-related claims through specialized insurance products paid for by developers or through collective pooling arrangements among developers and deployers. Such mechanisms distribute risk across the ecosystem and ensure compensation even when individual entities lack the resources to address large-scale harms.

Developing a collaborative framework

Relying solely on voluntary commitments creates significant problems, as it fails to establish binding obligations and leaves AI systems’ operations opaque. The proposed Indian AI Safety Institute (AISI) should play an active role in facilitating an iterative, collaborative process for generative AI governance involving civil society, academics, industry, and experts. The AISI should conduct empirical evaluations of AI models and systems to develop safety standards and establish benchmark tests focused on explainability, interpretability, and accountability. Based on these collaboratively designed standards, stakeholders can develop practical codes and frameworks that promote innovation and mitigate risks by guiding relevant actors, such as generative AI platforms, deployers, and social media platforms, toward compliance with best practices.

Further, the AISI can leverage both domestic and international partnerships to strengthen generative AI governance as the technology evolves, without imposing rigid rules that could negatively impact innovation. Such collaborations would allow AISIs to share information, enabling swift responses to harms that emerge in one jurisdiction before they spread to others. India should therefore empower its AISI not only to evaluate the implications of AI models and systems locally but also to actively engage in a global network of AISIs. This globally coordinated approach would provide early indications of emerging threats and support adaptive, context-sensitive governance in the rapidly evolving landscape of generative AI.

Moreover, given the diverse applications of synthetic media, from entertainment to political misinformation, clearly defining permissible and impermissible uses in collaboration with relevant stakeholders is essential. The expanding scope of synthetic media, which now includes voice cloning, full-body manipulation, and text-to-image generation, poses challenges for effective regulation. Without clear, collaboratively established boundaries, regulators risk stifling legitimate creative expression while failing to adequately address harmful uses, producing regulatory inconsistencies across jurisdictions. Insights from these efforts can help shape a co-regulatory framework that protects users while supporting technological progress.
