The Landscape of Facial Recognition Technologies in India

Amber Sinha / Mar 13, 2024

Amber Sinha is a Tech Policy Press fellow.

The last five years have witnessed an exponential rise in the use of facial recognition technologies (FRT) in India. The sectors in which such technologies have been deployed range from law enforcement to healthcare, and education to the food and beverage industry, indicating an uptake of FRT across different use cases and among clients from both the public and private sectors.

This article considers key projects involving FRT in India and the relative regulatory vacuum in which they operate. It also compares Indian regulation and policymaking directions with other key jurisdictions such as the United States, the United Kingdom, and the European Union. These jurisdictions have traditionally been influential in policy discourses in India and have also seen increased uses of FRT.

What is FRT, and how does it work?

Facial recognition technology is a biometric identification technology, much like fingerprint and iris scanning. Using local feature analysis algorithms, the technology analyzes photographs and video to measure metrics such as the shape of the chin, the distance between the eyes, and other distinctive facial characteristics, combining them into a mathematical sequence called a face template. This face template, much like a fingerprint biometric sequence, serves as a person’s unique identifier. The technology can then use facial features to recognize individuals in inputs (photographs, videos, or real-time feeds), working with both visible light and infrared waves.

Broadly speaking, FRT can do two things. The first is to identify an unknown person: the system records an image of the person’s face, converts it to a face template, and runs it against a database to see if there is a hit. Law enforcement agencies use facial recognition technology in this manner. The second is to verify the identity of a known person, where the image is authenticated against one known template, as in the facial recognition feature used to unlock phones.
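
To make the distinction concrete, here is a minimal sketch of the two modes, assuming face templates are unit-length vectors compared by cosine similarity. The embed() function is a hypothetical stand-in for a real face-embedding model, and the 0.6 threshold is an arbitrary illustrative value.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Hypothetical stand-in for a face-embedding model: maps an image
    to a unit-length face template (here, a 128-dimensional vector)."""
    rng = np.random.default_rng(int(image.sum()) % (2**32))
    v = rng.standard_normal(128)
    return v / np.linalg.norm(v)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two unit-length face templates."""
    return float(a @ b)

def verify(probe_img, enrolled_template, threshold=0.6) -> bool:
    """1:1 verification: does the probe match one known template?"""
    return similarity(embed(probe_img), enrolled_template) >= threshold

def identify(probe_img, database: dict, threshold=0.6):
    """1:N identification: search a database for the best-scoring match."""
    probe = embed(probe_img)
    best_id, best_score = None, threshold
    for person_id, template in database.items():
        score = similarity(probe, template)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id  # None means no hit above the threshold
```

The design difference matters: in identification mode, every enrolled person in the database is a potential match, which is why 1:N deployments raise qualitatively different concerns than 1:1 verification.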

State-of-the-art FRT relies on machine learning techniques such as deep learning. New inputs (facial scans) are treated as a matrix of pixels and passed through a series of statistical units termed “artificial neurons.” These units output numerical weights based on factors such as pixel color and density. Many layers of such interconnected units are provided with several labeled inputs, which form a heuristic through which the weights are adjusted. In this training phase, the units collectively “learn” to recognize features such as edges and shapes corresponding to given labels.
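
As a rough illustration of this weight-adjustment process, the sketch below fits a single layer of softmax units to synthetic labeled data by gradient descent. It is a toy, not a real FRT pipeline: actual systems stack many layers (typically convolutional) and train on large labeled face datasets, and the data, dimensions, and learning rate here are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled inputs: 300 flattened 8x8 "images" across 3 classes.
X = rng.standard_normal((300, 64))
y = rng.integers(0, 3, size=300)
W = np.zeros((64, 3))  # the numerical weights the units adjust

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # stabilized exponentials
    return e / e.sum(axis=1, keepdims=True)

# Training loop: compare predictions with the labels and nudge the
# weights in the direction that reduces the error (cross-entropy loss).
for _ in range(200):
    probs = softmax(X @ W)
    probs[np.arange(len(y)), y] -= 1.0   # gradient of the loss w.r.t. logits
    W -= 0.1 * (X.T @ probs) / len(y)    # gradient-descent update
```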

When a new input is given, the units output high weights when they encounter features that were present in the training data. These weights are normalized and used to produce labels with a confidence number (e.g., this input is 95% a zebra and 60% a horse). These systems also typically include a liveness check, such as blinking or a series of head movements, to ensure the subject being identified is a real person.
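
One common way to turn raw output weights into confidence numbers is the softmax function, sketched below with made-up logit values. Note that softmax scores sum to 100% across labels; systems that report independent per-label scores (as in the zebra/horse example above, which exceeds 100%) use a different normalization, such as a per-label sigmoid.

```python
import numpy as np

def confidences(logits, labels):
    """Normalize raw output weights into label confidences via softmax."""
    exp = np.exp(logits - np.max(logits))  # shift by the max for stability
    probs = exp / exp.sum()
    return {label: round(float(p), 3) for label, p in zip(labels, probs)}

print(confidences(np.array([4.2, 1.3, -0.5]), ["zebra", "horse", "cat"]))
# -> {'zebra': 0.94, 'horse': 0.052, 'cat': 0.009}
```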

FRT also involves a tradeoff between falsely identifying a face and failing to find a match for a face. The risk that a person’s face is wrongly matched is measured by the false positive rate (the probability of a wrong match), which many systems try to minimize as much as possible. But minimizing false positives increases the false negative rate (the probability of failing to match a face). These are inherent accuracy limitations of any facial recognition technology. Other factors such as lighting, background, perspective, pose, and expressions also influence and can compromise the accuracy of FRT.
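
The tradeoff is easy to see in a small simulation. Assuming (purely for illustration) that genuine and impostor comparisons produce overlapping similarity-score distributions, raising the decision threshold drives the false positive rate down while pushing the false negative rate up:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic similarity scores: genuine pairs (same person) tend to score
# higher than impostor pairs (different people), but the two overlap.
genuine = rng.normal(0.75, 0.10, 10_000)   # same-person comparisons
impostor = rng.normal(0.45, 0.10, 10_000)  # different-person comparisons

for threshold in (0.5, 0.6, 0.7):
    fpr = np.mean(impostor >= threshold)  # wrong matches accepted
    fnr = np.mean(genuine < threshold)    # true matches rejected
    print(f"threshold={threshold:.1f}  FPR={fpr:.3f}  FNR={fnr:.3f}")
# Raising the threshold lowers the false positive rate
# at the cost of a higher false negative rate.
```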

FRT use in India

In the last five years, the use of FRT has proliferated in India, particularly by the Indian state and its agencies. The use of FRT runs parallel to disputes over the use of Aadhaar (a biometric national ID system), high rates of failure of other modes of verification, an increase in street surveillance, and a government push to modernize domestic law enforcement and national security agencies.

While the story of the deployment of FRT by the government in India has so far been one of start-stop, often with two steps forward followed by one step backward, it does demonstrate the intent of the state to use FRT across a wide range of services. Below, I cover a cross-section of these uses; see the accompanying Airtable for a detailed list of FRT uses in India.

Telecommunications and travel. The first significant discussion of FRT in India began in mid-2018, when the media reported that the Unique Identification Authority of India (UIDAI), the agency that administers Aadhaar, would conduct a phased rollout of facial recognition starting with telecom companies. The use of FRT was positioned as a solution to the failures of fingerprint and iris authentication. It was intended to be used in combination with another form of authentication, such as iris scans, fingerprints, or mobile-based OTP. Telecom companies were not merely permitted to carry out FRT-based authentication: UIDAI also attached a financial disincentive, setting a minimum target of 10% of total identity verifications to be FRT-based.

However, the Department of Telecommunications paused the rollout of FRT. A case concerning the legal validity of the Aadhaar project was pending before the Supreme Court, and the rollout plans were shelved after the court placed significant limits on the private use of the Aadhaar infrastructure. Notably, the government would later circumvent the limitations posed by the court.

A few months later, the Times of India reported that pilot experiments for web check-ins using FRT had been carried out at airports in Bengaluru and Hyderabad, and would be introduced as an additional, non-mandatory feature called Digi-Yatra. Shortly after that, Digi-Yatra was also introduced at the New Delhi airport. So far, the intended use case of FRT seems limited to face verification, comparing a probe image of a face with a specific image in a database.

However, aside from verification, the implementation of the system also suggested that primary data collection and the linking of travelers’ identities to Aadhaar numbers are among its key purposes. At the enrolment stage of the Aadhaar program, the data collected includes fingerprints, iris scans, and a passport-size photograph. Using FRT projects to supplement data collection is perhaps intended to compensate for the fact that images suitable for creating face templates were not collected at the Aadhaar enrolment stage, the passport-size photo being a poor form of input data for FRT.

Policing. With the call for tenders to implement a centralized Automated Facial Recognition System (AFRS) launched by the National Crime Records Bureau (NCRB) on June 28, 2019, FRT rapidly moved from targeted verification to mass identification in India. The AFRS project’s primary purpose was “modernising the police force, information gathering, criminal identification, verification and its dissemination among various police organisations and units across the country.”

The goal was to create a centralized application and repository of photographs hosted at the NCRB Data Centre in Delhi, with remote access provided to all police stations nationwide. The AFRS would collect images from government databases across sectors such as passports, law enforcement, prison and criminal justice, women and child development, among others. It was unclear whether the database would only focus on suspects/criminals or create a photographic dossier on every citizen and non-citizen in India.

There were two key forms of FRT use that the original call envisaged. The first was that an officer in the field could snap a picture of a suspect and send it to their local station for FRT analysis. The second was integrating the system with a network of closed-circuit television (CCTV) cameras across the country. It would be obvious to anyone closely following the debate about FRT that the second form of usage would enable real-time mass surveillance. Responding to various questions about human rights abuses raised by civil society actors, the government ostensibly dropped the second purpose in a revised call, clarifying that the project would “not involve the installation of CCTV camera nor will it connect to any existing CCTV camera anywhere.” However, the same revised call added another data source, “[s]cene of crime images/videos,” as input data for the repository, suggesting a possible conflict with the earlier assertion.

In parallel, between 2019 and 2023, several other state-level and city-level FRT law enforcement projects emerged in Hyderabad, Chennai, Chandigarh, Uttar Pradesh, Uttarakhand, Bihar, Rourkela, Delhi, Jammu and Kashmir, Dharamsala, Odisha, and Haryana (see Airtable). The FRT and CCTV projects in Hyderabad became the focus of Amnesty International’s Ban the Scan global campaign against FRT.

Public health. The pandemic also saw the introduction of untested thermal imaging technologies alongside FRT in India. In May 2020, Kerala received its first thermal and optical imaging camera with AI-powered facial recognition software to monitor people for fever while maintaining social distance. Congress leader Shashi Tharoor had previously described it as a much-needed innovation in the district for screening people at a safe distance and isolating those possibly sick, given the vast number of travelers entering the state by flight or train. The use of FRT and thermal imaging to check masking and body temperatures became common at entrances to public spaces such as malls and hospitals.

Welfare programs and public benefits. Another use that gained momentum during the pandemic was the deployment of FRT for authentication in welfare programs, to enable social distancing. The first major instance of this was an app released by the Meghalaya government in 2021 to replace the existing pensioner verification process, which involved periodic visits to the Treasury Officer or Pension Disbursing Authority.

In the same year, the Union Government’s Department of Pension & Pensioners’ Welfare (DoPPW) launched an FRT-based app to verify pensioners and confirm that they are alive. It relies on the UIDAI AadhaarFaceRD mobile application and facial database to issue life certificates. The Telangana government went one step further: its Department of Consumer Affairs, Food, and Civil Supplies issued a tender for the installation and maintenance of 17,500 ePoS (electronic point of sale) devices “compatible to incorporate photo matching and face recognition options.”

Education. Another trend emerging across sectors is the use of FRT to verify attendance. Originally introduced in educational institutions during the pandemic to aid social distancing and check temperatures, it soon expanded to sectors as diverse as National Highways and Railways. In the education sector, FRT was also deployed during entrance examinations, both for verification and for eye tracking to prevent cheating. In one bizarre instance in Maharashtra, FRT was deployed for quality assurance of the food provided to students.

Elections. Perhaps one of the more alarming uses of FRT has been in pilot experiments in voting. In Telangana again, the State Election Commission introduced FRT for authentication in e-voting experiments during the Greater Hyderabad Municipal Corporation elections in early 2022. Piloted in 10 polling stations in the city, this was the first use of FRT in elections in India. Later, in 2023, FRT was piloted in the state assembly election in Karnataka. Under this system, every voter is required to install the 'Chunavana' mobile application and input their Elector’s Photo Identity Card (EPIC) number, located on the voter ID above the photo. Once the registered mobile number is entered, a one-time password (OTP) is generated, prompting the user to take a selfie through the app. This selfie is used as input data for verification at the polling booths. The stated purpose of the deployment of FRT was to cut down on queues, reduce manpower, and detect fraud. However, there was little clarity about what immediate measures would be implemented to address false negatives.

In December 2023, the National Informatics Centre (NIC) circulated a call for proposals for the procurement and deployment of surveillance equipment, including drones and FRT, to monitor election processes during union and state elections. The call details arrangements for live-streaming of the voting and counting procedures and the establishment of a “centralized command and control center” to oversee activities in real time, aiming to “prevent irregularities and uphold law and order at polling stations during elections.” A month later, upon directions from the Election Commission of India, NIC canceled the call.

Regulation of FRT in India and Possible Approaches

Facial recognition systems enable covert and remote mass identification. They can work without providing notice of their existence and use, require no direct interaction with the subject, and are usually deployed not against specific suspects but to surveil everyone. Significant privacy and free speech concerns therefore arise with any deployment of this technology for law enforcement purposes. Despite this, the regulatory oversight of FRT in India remains sparse.

Right to privacy

In 2017, in the Puttaswamy judgment, the Supreme Court of India held the right to privacy to be a fundamental right under the Indian Constitution. While this right is subject to reasonable restrictions, any restriction must meet a threefold requirement, namely (i) the existence of a law, (ii) a legitimate state aim, and (iii) proportionality. Because the deployment of FRT by the state, for law enforcement purposes or otherwise, constitutes an infringement of the right to privacy, it must satisfy all three requirements.

Yet there is currently no legislative sanction for the use of FRT by the state or any private entity. The NCRB has claimed that FRT has cabinet approval and that, therefore, there is no need for any legislative or executive order sanctioning the establishment of the AFRS. However, cabinet approval is not a statutory enactment, and it does not confer any legislative authority for the use of facial recognition technology.

In contrast, in the EU, there are clear legislative provisions that allow for and limit the use of FRT. For example, Article 9 of the GDPR prohibits the processing of biometric data for the purpose of uniquely identifying a natural person. This prohibition, however, does not extend to data made public by the subject or to cases where “processing is necessary for reasons of substantial public interest.” In most cases across the continent, the uses of FRT for non-law enforcement purposes fall under these exceptions.

The only dedicated policy document of any consequence focusing on FRT is a paper from NITI Aayog, a public policy think tank of the Indian government. “Responsible AI for All: Adopting the Framework – A Use Case Approach on Facial Recognition Technology” was part of its series of papers on Responsible AI. The paper expects that most of the privacy concerns arising from FRT would be addressed by India's data protection law (at that point, still in its consultation phases). However, the paper does not make exhaustive textual recommendations on how the legislation could ensure the ethical use of FRT.

It does make a minimal recommendation that the Central Government should not exempt law enforcement agencies from the oversight of the data protection law, as it can under Section 18(1)(a). However, even if the government followed the recommendation, it would do little to protect against the harms of FRT. One key way that data protection regulations can restrict the use of FRT is through the principle of data minimization. A sound application of purpose limitation requires that the images captured through FRT not be combined with any other database to create a 360-degree profile of a citizen. However, the current tenders for AFRS discussed above envisage the opposite and expressly request the conjoining of databases.

Usage Restrictions and Guardrails

FRT can also be utilized to control and stifle political opposition. This technology enables governments and law enforcement agencies to identify individuals participating in rallies or any form of political or social dissent, potentially subjecting them to surveillance to monitor their movements. The situation worsens when law enforcement agencies use technology that may reinforce their biases regarding dissent and criminal activities. These factors together create a chilling effect on free speech and expression and the freedom of assembly.

Here, proportionality is critical to consider. The content of the proportionality prong as articulated in the Puttaswamy judgment comprises (i) a ‘legitimate goal’ or proper purpose; (ii) ‘suitability,’ namely that the law must be a suitable means of furthering the aforesaid legitimate goal; (iii) ‘necessity,’ i.e., there must not be any less restrictive but equally effective alternative present; and (iv) ‘balancing,’ since the measure must not have a disproportionate impact on the right holder. The various use cases for FRT, including monitoring attendance and identity authentication, can be undertaken through measures that rely on less personal data or data that is not as sensitive in nature. Therefore, it can be argued that the deployment of FRT does not satisfy the necessity test listed above. Further, the use of FRT in street surveillance, by definition, translates into mass surveillance, which is indiscriminate as opposed to targeted surveillance.

Legislative and judicial developments in the EU and the UK provide an example of what regulatory restrictions on the use of FRT in law enforcement may look like. The Law Enforcement Directive in the EU allows for the processing of “biometric data for the purpose of uniquely identifying a natural person” only where strictly necessary or under narrowly defined circumstances. These restrictions provide a modicum of control to the burgeoning uses of FRT. Initial versions of the AI Act also outlawed remote biometric technologies; however, the final text appears to have wide exceptions for law enforcement purposes.

In the UK, the Data Protection Act provides protections similar to the GDPR. However, the UK Court of Appeal, in the first landmark case globally on law enforcement use of FRT, held that the use of FRT by South Wales Police was unlawful: it violated Article 8 of the European Convention on Human Rights and the Data Protection Act, and failed to comply with the Public Sector Equality Duty. Examining the covert and mass nature of FRT, the court observed that the main feature of this technology is that it “enables facial biometrics to be procured without requiring the co-operation or knowledge of the subject or the use of force, and can be obtained on a mass scale.”

Prohibiting discrimination

Inaccurate, discriminatory, and biased decisions can arise from poorly designed and trained FRT systems. A study by the Center for Privacy and Technology at Georgetown Law found that public facial recognition disproportionately impacts African Americans due to skewed training sets used in developing the software. In addition, research on publicly available facial recognition systems reveals instances of false positives, where the technology incorrectly matches a person's face with an image in the database. For example, in a 2018 test of Amazon's Rekognition tool by the American Civil Liberties Union, 28 members of the US Congress were falsely matched with individuals arrested for a crime, with a disproportionate impact on people of color.

In the UK, the Science and Technology Committee of the House of Commons has expressed concerns about the technology's misuse and recommended withholding the deployment of automatic facial recognition technology until issues related to effectiveness and potential bias are fully addressed. The EU's AI Act, though it does not directly prohibit discrimination with respect to FRT, includes several provisions that may compel vendors to build in more safeguards. For example, Article 10 requires high-risk AI systems to implement “appropriate data governance and management practices” and, in particular, to “examin[e] in view of possible biases that are likely to affect the health and safety of persons, negatively impact fundamental rights or lead to discrimination prohibited under Union law, especially where data outputs influence inputs for future operations.”
