India’s New IT Rules on Deepfakes Threaten to Entrench Online Censorship
Sarthak Gupta / Nov 7, 2025

In late October, the Indian Ministry of Electronics and Information Technology (MeitY) proposed amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the IT Act Amendments), seeking to address the growing phenomenon of “synthetically generated information.” The stated objective of the IT Act Amendments is to mitigate the spread of digitally manipulated or AI-generated content, commonly known as deepfakes, that may distort reality, mislead citizens, or cause reputational harm. The IT Act Amendments are presently undergoing public consultation and will be formally notified thereafter.
However, beneath this technocratic justification lies a deeper constitutional tension. The IT Act Amendments, by design and scope, extend the State's regulatory arm into online discourse. By imposing vague obligations on intermediaries and failing to delineate clear boundaries for enforcement, the IT Act Amendments risk undermining the fundamental right to freedom of speech and expression guaranteed under Article 19(1)(a) of the Constitution of India. What is framed as a safeguard against misinformation may, in practice, institutionalize a system of preemptive censorship.
The IT Act Amendments, under Rule 2(1)(w), introduce the concept of “synthetically generated information” as a new category of content subject to regulation. This term is defined expansively to include any information “generated, modified, or altered through algorithmic, computational, or artificial intelligence processes.” In effect, this includes not only malicious deepfakes but also benign, creative, and expressive uses of artificial intelligence, such as digital art, parody, satire, and political commentary.
The legal consequences of this overbroad classification are profound. It collapses the distinction between harmful deception and legitimate creativity, thereby subjecting both to the same regulatory standards. This overbreadth is constitutionally suspect: the Indian Supreme Court recognized in Shreya Singhal v. Union of India that laws that are vague in scope or ambiguous in application are prone to misuse and cannot withstand scrutiny under Article 19(2), which permits only narrowly tailored restrictions on speech.
The IT Act Amendments’ most consequential feature lies in the obligations they impose on intermediaries (social media platforms, search engines, and online service providers) under the new Rule 3(3). Under the parent statute, the Information Technology Act, 2000, intermediaries have historically enjoyed “safe harbor” protection under Section 79, which insulates them from liability for user-generated content provided they act as neutral conduits and remove unlawful material upon receiving actual knowledge through proper legal notice. Rule 3(3), by contrast, requires intermediaries to proactively identify, verify, and label synthetically generated content. This shift transforms platforms from passive facilitators of speech into active regulators of expression: they are now expected to pre-screen all user uploads, assess whether a post contains AI-generated material, and label or remove it as appropriate. In practice, this imposes a duty of constant surveillance, effectively converting private platforms into instruments of state censorship. Given the severe penalties for non-compliance, intermediaries will almost inevitably over-comply, taking down any content that is remotely questionable. This dynamic creates a chilling effect, in which lawful speech is curtailed not by explicit state command but by the fear of sanction.
A core constitutional infirmity of the IT Act Amendments is their lack of procedural safeguards. There is no clear mechanism for users to appeal the wrongful removal or labeling of their content, nor is there any independent adjudicatory body to oversee enforcement. Decision-making power rests entirely with the executive, specifically MeitY, creating a concentration of rulemaking, enforcement, and adjudicatory functions within a single authority. This structure is inconsistent with the principles of natural justice and the separation of powers, and the absence of independent review mechanisms allows for arbitrary enforcement and politically motivated targeting. In addition, the IT Act Amendments contain no explicit exceptions for satire, news reporting, academic research, or artistic expression, all forms of speech that lie at the heart of democratic participation. Without such exemptions, even commentary critical of the government or public figures could be subject to labeling or removal on the pretext of being “synthetic.”
Earlier this year, in May 2025, the United States enacted the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks (TAKE IT DOWN) Act, offering a markedly different approach to regulating synthetic media. The TAKE IT DOWN Act demonstrates that it is possible to confront the harms of artificial intelligence–generated content through narrowly tailored legislation that preserves constitutional guarantees of free expression. Under Section 2(a), it criminalizes the intentional publication of non-consensual intimate visual depictions, including both authentic and AI-generated (“digital forgery”) content, where there is intent to cause harm or actual harm results. It requires proof of both lack of consent and intent to cause harm, and it embeds clear exceptions for uses in education, satire, journalism, and matters of public concern under Section 2(b). Enforcement is entrusted under Section 3(b) to the Federal Trade Commission, an independent regulatory body subject to judicial oversight, thereby ensuring procedural accountability and limiting executive overreach.
By contrast, India’s IT Act Amendments adopt a far more expansive and pre-emptive regulatory model. They treat all forms of synthetically generated content as inherently suspect, irrespective of context, intent, or potential harm. This framework conflates malicious deepfakes with legitimate artistic, journalistic, or political expression. Such a preventive approach fails to satisfy the proportionality test articulated by the Indian Supreme Court in Modern Dental College v. State of Madhya Pradesh and Justice K.S. Puttaswamy (Retd.) v. Union of India, which mandates that restrictions on fundamental rights must be (i) necessary for achieving a legitimate aim, (ii) the least restrictive means available, and (iii) proportionate to the harm sought to be addressed. The IT Act Amendments, by imposing blanket restrictions without individualized assessment or procedural safeguards, fail at least the last two limbs of this test. Whereas the American model aligns with a rights-based philosophy of targeted intervention, India’s IT Act Amendments veer toward a regime of prior restraint, transforming the regulation of synthetic information into a mechanism for control rather than protection.
The Indian Supreme Court has consistently held that prior restraints on speech are presumptively unconstitutional, save for exceptional circumstances subject to judicial scrutiny, as affirmed in Brij Bhushan v. State of Delhi, Express Newspapers v. Union of India, and R. Rajagopal v. State of Tamil Nadu. Beyond their immediate legal infirmities, the IT Act Amendments signify a broader philosophical shift in India’s digital governance: the migration from regulation for transparency to regulation for control. Under the pretext of protecting users from misinformation, the State acquires an expansive mandate to dictate the authenticity of online expression.
By transforming platforms into compliance agents and citizens into cautious participants, the IT Act Amendments foster an environment of self-censorship, a digital public sphere governed not by open discourse but by anticipatory compliance. The cumulative effect of these changes is the erosion of India’s standing as a democracy committed to the free exchange of ideas. In a polity where digital media constitutes the principal arena of civic engagement, restricting synthetic expression amounts to restricting modern political participation itself.