In February, India, along with France, co-hosted the AI Action Summit in Paris. At its close, it was announced that the next edition will be held in India. In its naming, priorities, and focus, the summit marked a clear shift from “safety” to “innovation” as the principal theme of artificial intelligence discourse. This move aligns with India’s lax regulatory stance on AI governance, even in high-risk areas such as healthcare and surveillance-driven technologies like facial recognition.

At the upcoming summit, this shift will enable the Indian government to steer discussions toward innovation, investment, and accessibility while avoiding scrutiny of its weak legal protections, which create an environment conducive to unregulated technological experimentation.

Shortly after the introduction of Chinese start-up DeepSeek’s R1 model – which upended assumptions about large language models and how much it might cost to develop them – the Indian Ministry of Electronics and Information Technology announced plans to develop indigenous foundation models using Indian language data within a year and invited proposals from companies and researchers under its IndiaAI Mission.

While local development of foundation models is still at an early stage, the domain of AI that has already seen widespread adoption and deployment in India is facial recognition technology. As India contemplates a sustained push toward AI development, and will likely leverage its hosting of the next AI Summit to attract investment, it is instructive to examine how it has deployed and governed facial recognition technology.

Understanding Facial Recognition Technology

Facial recognition technology is a probabilistic tool developed to automatically identify or verify individuals by analysing their facial features. It enables the comparison of digital facial images, captured via live video cameras (such as CCTV) or photographs, to ascertain whether the images belong to the same person.

Facial recognition technology uses algorithms to analyse facial features, such as eye distance and chin shape, creating a unique mathematical “face template” for identification. This template, similar to a fingerprint, allows facial recognition technology to identify individuals from photos, videos, or real-time feeds using visible or infrared light.
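To make the “face template” idea concrete, the following minimal Python sketch shows how such a template might be represented and compared. The extract_template function is a hypothetical placeholder for a trained model, and the 0.6 similarity threshold is an illustrative assumption; real systems tune this value per model and deployment.

```python
import numpy as np

def extract_template(face_image: np.ndarray) -> np.ndarray:
    """Hypothetical placeholder: a real system would run a trained
    model here to map a face image to a fixed-length vector."""
    raise NotImplementedError

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face templates, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(template_a: np.ndarray, template_b: np.ndarray,
                threshold: float = 0.6) -> bool:
    """Declare a match if the two templates are similar enough.
    The 0.6 threshold is illustrative, not a standard value."""
    return cosine_similarity(template_a, template_b) >= threshold
```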

Facial recognition technology has two main applications: identifying unknown individuals by comparing their face template to a database (often used by law enforcement) and verifying the identity of a known person, such as unlocking a phone. Modern facial recognition technology utilises deep learning, a machine learning technique.
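The two applications differ mainly in how the comparison is posed, as this sketch illustrates. The enrolled database, names, and threshold are all invented for illustration, under the same template-as-vector assumption as above.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Invented enrolled templates, standing in for a real gallery database.
rng = np.random.default_rng(0)
database = {name: rng.normal(size=128) for name in ("person_a", "person_b")}

def verify(probe: np.ndarray, claimed_identity: str,
           threshold: float = 0.6) -> bool:
    """1:1 verification: does the probe match the claimed identity?"""
    return cosine_similarity(probe, database[claimed_identity]) >= threshold

def identify(probe: np.ndarray, threshold: float = 0.6) -> str | None:
    """1:N identification: return the best-scoring enrolled identity,
    or None if no score clears the threshold."""
    best = max(database, key=lambda name: cosine_similarity(probe, database[name]))
    return best if cosine_similarity(probe, database[best]) >= threshold else None
```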

During training, artificial neurons learn to recognise facial features from labelled inputs. New facial scans are processed as pixel matrices, with neurons assigning weights to features and producing identity labels with associated confidence scores. Liveness checks, such as blinking, help ensure the subject is real. Still, facial recognition technology faces accuracy challenges – balancing false positives (wrong matches) against false negatives (missed matches). Minimising one often increases the other. Factors such as lighting, background, and expression also affect accuracy.
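This tradeoff can be illustrated by sweeping the match threshold over two score distributions: raising the threshold cuts false positives but increases false negatives, and lowering it does the reverse. The distributions below are synthetic; in a real evaluation they would come from scoring labelled pairs of images.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic similarity scores: genuine pairs (same person) tend to
# score higher than impostor pairs (different people).
genuine = rng.normal(loc=0.7, scale=0.1, size=1000)
impostor = rng.normal(loc=0.4, scale=0.1, size=1000)

for threshold in (0.45, 0.55, 0.65):
    false_negatives = int(np.sum(genuine < threshold))    # missed matches
    false_positives = int(np.sum(impostor >= threshold))  # wrong matches
    print(f"threshold={threshold:.2f}  FN={false_negatives}  FP={false_positives}")
```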

Over the past seven years, facial recognition technology has seen widespread adoption in India, especially by the government and its agencies. This growth has coincided with debates surrounding Aadhaar (the national biometric ID system), frequent failures of other verification methods, a rise in street surveillance, and government efforts to modernise law enforcement and national security operations.

In this review, I survey the range of facial recognition technology deployments across sectors in India, in both public and private service delivery. This adoption tells the story of an exponential rise in the use of facial recognition technology in India, with barely any regulatory hurdles despite clear privacy and discrimination harms.

Locating India’s regulatory approach

While efforts toward regulating AI are still in their infancy, with a handful of global regulations and considerable international debate about the appropriate approach, regulatory discussions about facial recognition technology predate them by a few years and are a little more evolved.

Facial recognition technology systems can produce inaccurate, discriminatory, and biased outcomes due to flawed design and training data.

A Georgetown Law study on the use of facial recognition technology in the US showed disproportionate impacts on African Americans, and tests revealed frequent false positives, particularly affecting people of colour.

In 2019, the UK’s Science and Technology Committee recommended halting facial recognition technology deployment until bias and effectiveness issues were resolved. The UK government countered the report by stating that the existing legal framework already offered sufficient safeguards for the application of facial recognition technology.

Civil society organisations have been demanding bans or moratoriums on the use and purchase of facial recognition technology for years, most notably after a New York Times investigation in 2019 revealed that more than 600 law enforcement agencies in the US rely on the technology provided by a secretive company known as Clearview AI.

An impact assessment commissioned by the European Commission in 2021 observed that facial recognition technology “bear[s] new and unprecedentedly stark risks for fundamental rights, most significantly the right to privacy and non-discrimination.”

The European Union and UK offer regulatory models for facial recognition technology in law enforcement. The EU’s Law Enforcement Directive restricts biometric data processing to strictly necessary cases.

While initial drafts of the EU’s AI Act banned remote biometrics – such as the use of facial recognition technology – the final version has exceptions for law enforcement. In the UK, the Data Protection Act mirrors Europe’s General Data Protection Regulation (GDPR), and a landmark court ruling deemed police facial recognition technology use unlawful, citing violations of human rights and data protection, and the technology’s mass, covert nature.

The EU’s AI Act, while not explicitly banning discriminatory facial recognition technology, mandates data governance and bias checks for high-risk AI systems, potentially forcing developers to implement stronger safeguards. The GDPR generally bans processing biometric data for unique identification, but exceptions exist for data made public by the subject or when processing is for substantial public interest.

In Europe, non-law enforcement facial recognition technology often falls under these exceptions. Under EU law, facial recognition technology may be permitted in strictly defined circumstances, where a legislator provides a specific legal basis for its deployment that is compatible with fundamental rights.

US Vice President JD Vance’s rebuke of “excessive regulation” of AI at the Paris Summit in February telegraphed the current US federal government’s lack of intent to regulate AI. However, numerous state-level regulations are in operation in the US.

Canada’s Artificial Intelligence and Data Act (AIDA) follows the EU model of risk-based regulation. Countries like South Korea have taken a more light-touch approach, with Seoul’s AI Basic Act including a smaller subset of protections and ethical considerations than those outlined in the EU law. Japan and Singapore have explored self-regulatory codes rather than command-and-control regulation.

The Indian Supreme Court’s Puttaswamy judgment, which upheld a right to privacy, outlines a four-part proportionality test to determine whether state actions violate fundamental rights: a legitimate goal, suitable means, necessity (meaning there are no less restrictive alternatives), and a balanced impact on rights.

Facial recognition technology applications such as attendance marking and authentication often have less intrusive alternatives, suggesting they fail the necessity test. Street surveillance using facial recognition technology inherently involves indiscriminate mass surveillance, not targeted monitoring.

India’s newly legislated Digital Personal Data Protection Act, whose rules are currently being framed, permits the government to process personal data without consent in certain cases. Section 17(2) grants a broad exemption from the Act’s provisions for personal data processing by state entities designated by the Indian government, for reasons as broad as sovereignty, security, foreign relations, public order, or preventing incitement to certain offences.

In India, the primary policy document on facial recognition technology is a NITI Aayog paper, “Responsible AI for All,” which anticipates that India’s data protection law will address the privacy concerns the technology raises. However, it lacks detailed recommendations for ethical use of facial recognition technology, beyond suggesting that the government should not exempt law enforcement from data protection oversight. It remains to be seen whether this recommendation will be followed, but even this alone would be insufficient protection.

Data minimisation, a key data protection principle requiring that only strictly necessary information be collected, would restrict facial recognition technology by preventing captured images from being merged with other databases to form comprehensive citizen profiles.

Yet, tenders for Automated Facial Recognition Systems (AFRS), to be used by law enforcement agencies, explicitly called for database integration, contradicting data minimisation principles.

India’s lenient approach toward facial recognition technology regulation, even as there is widespread adoption of the technology by both public and private bodies, suggests a pattern of regulatory restraint when it comes to emerging digital technologies.

Rest of World recently reported on an open-arms approach that India has taken to AI, with a focus on “courting large AI companies to make massive investments.” As a prime example, both Meta and OpenAI are seeking partnerships with Reliance Industries in India to offer their AI products to Indian consumers, which would be hosted at a new three-gigawatt data center in Jamnagar, Gujarat.

These investments in India need to be seen in the context of a number of geopolitical and geoeconomic factors: data localisation regulations under India’s new data protection law, the negotiating power that the Indian government and the companies close to it possess by leveraging the size of its emerging data market, how these factors facilitate the emergence of domestic BigTech players like Reliance, and most importantly, the Indian government’s overall approach toward AI development and regulation.

It was earlier reported that the much-awaited Digital India Act would include elements of AI regulation. However, the fate of both that legislation and any other form of AI regulation is, for the moment, uncertain.

As recently as December 2024, Ashwini Vaishnaw, the Indian minister of electronics and information technology, stated in the Indian Parliament that much more consensus was needed before a law on AI could be formulated. This suggests that the Indian government currently has no concrete plans to begin work on any form of AI regulation and, despite the widespread use of AI and its well-documented risks, will stay out of the first wave of global AI regulations.

Amber Sinha is a Contributing Editor at Tech Policy Press and incoming Executive Director of European Digital Rights (EDRi).