Transparency Must be a Cornerstone of the Digital India Act

Aahil Sheikh / Apr 23, 2024

Rajeev Chandrasekhar, Minister of State in the Ministry of Skill Development and Entrepreneurship and Electronics and Information Technology of India, at the Digital India FutureSKILLS Summit in Guwahati, India, on Thursday, February 15, 2024. Shutterstock

In March 2023, Mr. Rajeev Chandrasekhar, India’s Minister of State for the Ministry of Electronics and Information Technology (MEITY), announced the proposed Digital India Act (DIA) to replace the Information Technology Act (IT Act), which was enacted in 2000. Surveying a digital landscape that has changed significantly since the IT Act became law, MEITY presents the DIA as a new law for a new generation. It is built on the foundations of the Digital India Goals 2026 (MEITY, 2023), which envision India as a ‘significant trusted player in the global value chains,’ among other goals, and it seeks to ‘ensure Indian Internet is Open, Safe and Trusted and Accountable’ while protecting citizens’ rights and addressing risks from emerging technologies. The DIA could also affect the ‘safe harbor’ currently provided to all online intermediaries.

In essence, India is joining the growing number of countries that view online platforms with skepticism and aim to enforce online safety through new regulations. The Digital Services Act (DSA) in the European Union and the Online Safety Act in the United Kingdom are just two examples of such legislation. The DIA is an attempt to protect the 850 million Indians on the Internet while advancing the tenets of Digital India, which include accountability, a dedicated adjudicatory mechanism for online civil and criminal offenses, the development of a unified cyber jurisprudence, and more.

India is just one of many countries heading into elections this year, and while the DIA has most likely been shelved until after the 2024 General Elections, it could become a cornerstone of how online spaces are governed in the country, especially for tackling mis- and disinformation and keeping vulnerable minority groups safe. An essential component of online safety, however, is transparency. If India seeks to foster a truly safer online experience for its population, it must uphold the virtues of openness and accountability in how harms occur and are prevented across online spaces in India.

A new IT law is the need of the hour

With the 2016 launch of Jio – now the country’s largest telecom provider – the number of Indians with access to the Internet has skyrocketed. With an increase in users comes an increase in risk. In 2023, the National Human Rights Commission of India noted that the circulation of Child Sexual Abuse Material (CSAM) on social media in India has increased by 250 to 300 percent, with about 450,000 documented cases of CSAM in 2023 alone. According to Mr. Chandrasekhar, the current IT Act is inadequate for such a challenge, despite amendments in 2022 to hold intermediaries liable.

CSAM is just one area where current Indian tech policy lags behind global standards. Combine this with the absence of consolidated regulatory requirements on risk assessments of automated decision-making systems (ADMS), the lack of a harmonized institutional enforcement agency, and the lack of a dedicated adjudicatory mechanism, and it becomes obvious that the IT Act needs a successor for a new age. India did enact its first Digital Personal Data Protection Act in 2023, though not without its fair share of criticism.

With the passing of a crucial data protection law and the proposal of a larger technology regulation overhaul looming large, it seems the Government of India has recognized the rationale for replacing an aging regulatory regime and is now moving fast to institute new rules.

Learning from the past: how existing digital regulations can inform the DIA

The deliberation and consultation process is a crucial element of the democratic policy-making process. Discussions within the Parliament by all members allow a law to be truly representative of all people. Taking it one step further, by opening it up to the public, the Government can assimilate what activists, civil society organizations, and academia highlight about the law. The primary advantage of this multistakeholder process is that it strengthens the law, contributing to its longevity and ability to proactively address any future harms.

The lessons from the Digital Personal Data Protection Bill of 2023 (DPDB 2023) may serve as a cautionary tale. The Bill was passed within 52 minutes with the involvement of merely nine Members of Parliament in the Lok Sabha (the lower house of the Indian Parliament). Moving to the Rajya Sabha (the upper house), the legislation was passed after just one hour of debate, with seven Members speaking on the bill. The Internet Freedom Foundation highlighted several issues with the DPDB 2023, noting hurdles encountered during the consultation process, the lack of substantive debate on the legislation, and problems with the Bill itself. These include sweeping powers granted to the Union Government, such as appointing members of the Data Protection Board, along with extensive exemptions granted to the Government.

Couple these with concerns about surveillance and non-consensual data collection in the DigiYatra program at Indian airports, and it becomes clear that the DIA must follow a better path. Detailed definitions, the avoidance of contradictory clauses within the law, and the empowerment of independent agencies free from the overshadowing involvement of government authorities can make the upcoming law far more resilient to abuse.

What does the proposal say so far on transparency?

The proposed DIA, first discussed in March 2023, frames transparency chiefly as algorithmic transparency. Divij Joshi, a Mozilla Tech Policy Fellow who worked on Trustworthy AI, created the AI Observatory, which documents cases of ADMS in India and their effects. A quick look through the algorithms deployed in the country and their impact validates the need for algorithmic accountability and transparency. However, the domain of online safety extends beyond the physical spaces where algorithms deny food rations to the poor. The case of Indonesia, where image generation tools such as Midjourney played a role in the run-up to its elections, illustrates what generative AI can do. The current laws in India are inadequate for this rapidly developing field.

Hence, when the DIA proceeds to the discussion and debate stage, it must holistically account for algorithms that make an impact offline (such as those governing the provision of benefits, facial recognition, etc.) as well as those affecting online spaces (fabricated videos and images, recommendation systems, etc.). The term ‘algorithm’ should not be given a single blanket definition; rather, it should be defined in light of the different scenarios and contexts in which algorithms will be deployed. Precise definitions with examples of applicability should be established if algorithmic systems are to be transparent. While the proposal mentions ‘periodic risk assessments by digital entities’ and includes ‘AI based ad-targeting, content moderation’ systems, these should be framed within the context of safety, given the potential of such systems to foster violent and harmful content. For example, the UK’s Online Safety Act considers algorithmic risk assessments in the context of children and other groups to protect them from unsafe content.

The need for transparency reporting

Laws such as the DSA require all online platforms operating within the EU to publish regular transparency reports disclosing how each platform engages with harmful content, figures on their content moderation practices, orders from member states to remove content and the reasoning behind them, the number of content moderators and their language expertise, and more. Currently, Rule 4(1)(d) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 calls for a monthly compliance report disclosing information on complaints received, actions taken, content removal, automated tools, or ‘any other relevant information’ as may be specified. Here, the DIA has an opportunity to build upon what exists in the Indian context while borrowing elements of robust transparency reporting as enshrined in other regulations across the globe.

Meta’s transparency report for Instagram under the DSA, for instance, includes sections on the types of illegal content reported, the number of government orders seeking more information, median response times, metrics on content removal by humans versus automated tools, and more. Special attention should be drawn to the section on languages: transparency reports filed in compliance with the DSA disclose the number of reviewers who speak each official EU language and are thus capable of interpreting and acting upon content in that language. A consistent criticism leveled against tech giants such as Meta has been that they lack content moderators who speak languages besides English. In India – home not just to Instagram but also to Meta’s largest user bases on Facebook and WhatsApp – the company reportedly lacked moderators who speak essential languages such as Hindi and Bengali. This came to light with the disclosure of the Facebook Files, which revealed how the lack of language-specific classifiers and moderators contributed to hate speech and violence in the country.

This is a problem across multiple platforms. An investigation by Access Now and Global Witness published this month – at the start of voting in India’s elections – found that YouTube failed to detect misleading ads that violated the platform’s terms of service. The ads were in English, Hindi, and Telugu. The investigators argue that similar disinformation in English and Spanish ahead of the 2022 US midterm elections was tackled appropriately in line with YouTube’s policy, so the failure in the Indian context suggests selective moderation. While Google released a statement saying its enforcement policies would have intercepted the disinformation content in question even after it passed the initial technical check, this episode is another reason why language transparency is crucial in India.

With the DIA, India can champion a more diverse and inclusive content moderation space within tech companies by enacting transparency requirements that compel platforms to disclose the exact number of moderators they employ for each language beyond English. With more than 700 languages spoken in India, online platforms must account for the country’s linguistic diversity and reflect it in their content moderation methods.

The ramifications of delaying the DIA

As mentioned above, the DIA will not be tabled in Parliament for discussion until after the 2024 Indian General Elections. While it will naturally take some time to iron out the details of the law, this delay may be a cause for concern, as numerous experts have voiced. For example, Kunal Sharma, Partner at Singhania and Co., states that the delay can harm the digital economy and online enterprises, and pose risks to India’s international commitments such as the World Trade Organization’s Agreement on E-Commerce. An uncertain regulatory environment is the common factor tying these together. Other concerns include the urgency of enforcing online safety standards. While ‘what if’ scenarios are speculative, it is worth considering what the election landscape would look like had the country been prepared with such a law. For context, India has been rated at the highest risk of mis- and disinformation, according to the World Economic Forum’s 2024 Global Risks Report. Furthermore, as deepfake pornography proliferates, affecting celebrities and politicians alike, the current regulatory landscape proves inadequate.

The DIA can act as a remedy to a plethora of new-age maladies affecting online spaces. While consultation and effective discussion are crucial to the process, a delay can mark the beginning of a long road on which the Government struggles to keep pace with international standards and malevolent actors online. Having cited extensive consultations and a lack of time as reasons for the delay, the Government should itself abide by transparency and issue a timeline for its proceedings without hampering the consultation process.

Transparency online and offline

Ushering in a new set of tech regulations for modern times is a gigantic task for any country, and given India’s size and unique set of problems, the path ahead will be long and arduous. With the sunset of the IT Act comes the potential for great change.

That being said, one cannot preach and demand transparency from platforms without seeking the same from the agencies responsible for framing these laws. Principles of openness, accountability, and safety should extend to engagement between civil society and the government. As noted by the Internet Freedom Foundation (IFF), the creation and enactment of the DIA is a multistakeholder process that requires diverse perspectives collected through open and deliberative public consultations. Comparing the timelines of Europe’s DSA and the UK’s Online Safety Act from formulation to adoption, IFF argues that the proposed DIA should be given a wider window for debate and discussion than is usually afforded to such processes. While the Right to Information can empower citizens to seek more information about the process and timeline of the proposed DIA, the recent weakening of this right may negatively affect the broader practice of transparency in every field of public life in India.

With a long way to go before deliberations and public consultations resume, the DIA has the potential to make Indians on the Internet safer, providing a blueprint for other countries in Southeast Asia to follow. To abide by the promise it sets out for itself of making India a global leader, the new law must adhere to principles of transparency and accountability – offline and online, platforms and governments alike.


Aahil Sheikh
Aahil Sheikh is currently pursuing his Master's in Digital, New Technology and Public Policy at Sciences Po, Paris. Passionate about making online spaces safer and more equitable, Aahil is interested in exploring the intersection between internet culture, online spaces and user safety. He holds a Ba...