The Many Questions About India's New AI Advisory

Amber Sinha / Mar 6, 2024

Amber Sinha is a fellow at Tech Policy Press.

India's Minister for Electronics and Information Technology, Rajeev Chandrasekhar, addresses the AI Safety Summit at Bletchley Park, November 2023. UK Government, CC BY 2.0, via Wikimedia Commons.

In a rushed foray into AI regulation, the Indian government issued an advisory on March 1, 2024, asking platforms to seek the “explicit permission” of the Ministry of Electronics and Information Technology (MeitY) before deploying any “unreliable Artificial Intelligence model(s) /LLM/Generative AI, software(s) or algorithm(s)” for “users on the Indian Internet.” Additionally, it asks intermediaries or platforms to ensure that their systems do not permit any bias or discrimination or threaten the integrity of the electoral process, and to label all synthetically created media and text with unique identifiers or metadata so that such content is easily identifiable. Notably, this advisory follows a recent online exchange in which the Minister of State for IT, Rajeev Chandrasekhar, called Google Gemini’s response to the question, “Is Modi a fascist?” a direct violation of intermediary liability regulations and criminal law provisions.

The advisory should be seen in light of an earlier one issued in November 2023. That advisory asked significant social media intermediaries, including platforms such as Facebook, Instagram, and YouTube, to take down deepfake content within 24 hours of notification from an aggrieved person. Conditional on compliance with such takedown requests, the advisory made safe harbor available to platforms as internet intermediaries under Section 79(1) of the Information Technology Act, 2000.

Both advisories appear to have been issued under Rule 3(1)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules, 2021), which requires platforms to make reasonable efforts, including informing users, to ensure that users do not host content that spreads misinformation or impersonates another person. The November 2023 advisory was followed by another advisory in December 2023, which asked all internet intermediaries to comply with the IT Rules, 2021, in particular the provisions of Rule 3(1)(b). The new advisory on AI makes a direct reference to the December advisory.

The immediate response to the latest AI advisory was sharp criticism from various quarters and questions regarding its legal validity. Several startups voiced concerns that it “kills startups trying to build something in the field and only allows giant corporations who can afford additional resources for testing, and government approval.” In a tweet on March 4, Chandrasekhar clarified that the advisory applied only to “significant platforms” and would not apply to startups. In a subsequent tweet the same day, Chandrasekhar provided further clarification. He invoked the potential liabilities internet intermediaries face under India’s criminal laws and the loss of safe harbor protections when unlawful content is involved, and stated that platforms deploying “lab level /undertested AI platforms onto public Internet and that cause[s] harm or enable unlawful content” could protect themselves from such liabilities by seeking prior permission from the government.

This ill-conceived and poorly drafted advisory is reminiscent of the short-lived draft of the National Encryption Policy, which was released in 2015 and withdrawn within a month, with the government blaming its poor drafting on a junior officer. Unlike then, however, the government’s response has been bullish rather than contrite. In fact, the second clarificatory tweet by Chandrasekhar sounded annoyed, blaming “noise and confusion being created by those who [should] know better,” complete with a man-shrugging emoji.

Despite these two clarifications, the scope and effect of the advisory remain unclear. The first clarification tries to address questions of scope by limiting its application to “significant platforms.” But “platform” is not a defined term in either the Information Technology Act or the IT Rules, 2021. We can only assume that the tweet was referring to a “significant social media intermediary,” which is defined under the IT Rules, 2021. Does this mean that all intermediaries other than significant social media intermediaries are excluded from the application of the advisory? It is not entirely clear.

The next issue is the legal validity of the advisory itself. The text of the advisory makes no reference to any law or regulation from which it draws the enabling power to demand prior approval from the government. Advisories, unlike notifications, have no statutory force; they are merely clarifications that inform the public about provisions of law that already exist.

The Minister’s second clarifying tweet seems to imply that the advisory simply restates existing penal provisions and obligations applicable to intermediaries, and that all it does operationally is offer platforms a way to protect themselves through prior approval. If that is the case, it is not clear what legal provisions enable the government to offer additional protections to platforms supposedly in violation of the law. And if that is indeed the case, why are startups and other smaller companies falling outside the scope of “significant social media intermediaries” being denied this insurance?

Finally, the advisory uses general and vague terms such as “undertested” and “unreliable” AI, which are neither defined in any law or regulation nor have any clearly accepted meaning in the scientific literature. This makes meaningful compliance with the advisory almost impossible.

The legality of the IT Rules, 2021, is itself uncertain. At the time of writing, 17 petitions challenging the constitutional validity of the rules were pending before different courts in India. In light of these pending legal challenges, the government would do well to be cautious in implementing the rules' provisions rather than seeking to extend their scope to newer domains such as AI.
