In 2018, researchers from the United States conducted a study which found that facial recognition technology performed profoundly differently across demographic groups. When the software saw “lighter-skinned” men, it correctly identified their gender 99% of the time. But when it encountered “darker-skinned” women, the error rate shot up to nearly 35%.

Although the research was ground-breaking, it had a blind spot. It tested the facial recognition technologies with a database of photographs of parliamentarians from Rwanda, Senegal, South Africa, Finland, Iceland and Sweden – in other words, mostly Black or White faces.

When Smriti Parsheera, at the time a lead technology policy researcher at the National Institute of Public Finance and Policy in Delhi, read the research, she sat up and took notice. The American researchers “wanted to understand skin shades, but there are no Indians in that dataset – there is a vacuum”, she said. “We don’t see this kind of research happening in India.”

So, last year, Parsheera and Gaurav Jain, a technology policy researcher, decided to use a similar methodology to see how top facial recognition tools fared when presented with Indian faces.

The results were stark. They found that facial recognition tools failed far more often on Indian women than on Indian men. On average, the gender of Indian men was inaccurately identified half a per cent of the time. For Indian women, the figure was more than 7%.

This unreliability would be disconcerting anywhere. But, given the prospective size and scale of these technologies in India, researchers argue that their adoption could lead to grave consequences if not accompanied by rigorous debate and appropriate checks.

India’s experiments with facial recognition technology include the Ministry of Civil Aviation’s DigiYatra for airport entry, Telangana police’s own system, and an authentication system by the Department of Defence. Credit: Noah Seelam/AFP.

Parsheera and Jain’s assessment marks the first time that researchers have evaluated facial recognition accuracy on Indian faces, in what a burgeoning group of researchers hopes will be the start of a longer effort to contextualise algorithmic fairness for India. Their paper, supported by the IDFC Institute’s Data Governance Network, was recently presented at a workshop at the 2021 Computer Vision and Pattern Recognition conference in the US.

Sobering Findings

The audit tested four commercial tools – Amazon’s Rekognition, Microsoft Azure’s Face, Face++ and FaceX – against the public database of election candidates on the Election Commission of India website, chosen to ensure geographic diversity and to address ethical concerns.
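The mechanics of such an audit are straightforward to sketch. Below is a minimal, hypothetical illustration of the loop for one of the four tools, Amazon’s Rekognition, via its boto3 detect_faces call: the ground-truth file, image paths and tallying choices are assumptions for illustration, not details from Parsheera and Jain’s own code, and the other three services would each need their own client.

```python
# Minimal sketch of one step in an audit like Parsheera and Jain's:
# send a labelled photo to a commercial API and tally gender errors.
# Assumptions: the candidates.csv ground-truth file (image_path, gender)
# is hypothetical, and AWS credentials for Rekognition are configured
# in the environment.
import csv
import boto3

rekognition = boto3.client("rekognition")

def predicted_gender(image_path):
    """Return the API's gender label for the first face, or None if no face is detected."""
    with open(image_path, "rb") as f:
        response = rekognition.detect_faces(
            Image={"Bytes": f.read()},
            Attributes=["ALL"],  # needed to get gender and other attributes
        )
    faces = response.get("FaceDetails", [])
    if not faces:
        return None  # detection failure, counted separately
    return faces[0]["Gender"]["Value"]  # "Male" or "Female"

undetected = 0
errors = {"Male": 0, "Female": 0}
totals = {"Male": 0, "Female": 0}

with open("candidates.csv") as f:  # hypothetical ground-truth file
    for row in csv.DictReader(f):
        truth = row["gender"]  # "Male" or "Female"
        totals[truth] += 1
        guess = predicted_gender(row["image_path"])
        if guess is None:
            undetected += 1
        elif guess != truth:
            errors[truth] += 1

for g in totals:
    if totals[g]:
        print(f"{g}: {100 * errors[g] / totals[g]:.2f}% misclassified")
print(f"Faces not detected: {undetected}")
```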

Of the four tools, Microsoft’s failed to detect the largest number of Indian faces – a little over 1,000, an error rate of just above 3%.

Especially egregious was FaceX, the tool produced by a company headquartered in Bengaluru. It misidentified the gender of Indian women almost 11% of the time, but that of men only 1.35% of the time. FaceX also failed to detect over 800 Indian faces, a 2.6% error rate. “We thought that FaceX might be better because it’s an Indian company so they should have clients in India and use Indian faces, but their results were quite surprising,” said Jain.

The finding holds particular weight because most Indian government contracts for facial recognition systems are won by Indian companies, says Divij Joshi, a tech policy lawyer and researcher who tracked almost 25 facial recognition projects while he was a Mozilla Fellow last year.

Another sobering finding made by Parsheera and Jain was that, even though some tools improved their accuracy after the 2018 US study – called Gender Shades – one of them still misidentified Indian women at high rates. The Chinese-owned Face++ was found to be misidentifying Indian women almost 15% of the time, even though it had improved its classification of Black women after the Gender Shades study, reducing the error rate from 35% to 4%.


“The tools improved themselves for that particular demographic and those issues, but it’s not an improvement universally,” said Parsheera, who is currently a CyberBRICS Project fellow. “If you take Indian women, Thai women, etc., you’ll find your own set of issues. There also becomes this burden on civil society to keep digging to fix the narrow problem.”

The study also explored age prediction, which the researchers say seems to be highly erroneous across racial groups. None of the companies responded to repeated requests for comment sent by Scroll.in.

Racial Categories

As facial recognition technology becomes more pervasive, it is being used for everything from unlocking smartphones to smarter advertising. Law enforcement agencies use it too, mostly in two ways: they either run the photograph of a person of interest through a facial recognition system linked to a database of photos in order to identify the person, or they match a real-time photo of a person with an existing one to authenticate their claimed identity.

For the first purpose, the technology scans the geometry of the face, noting features such as the distance between the eyes, the shape of the cheekbones, or the contours of the lips. Based on these, the image is converted into a mathematical representation called a faceprint. This faceprint, which encodes the distinguishing details of the face, is compared with a database of known faces.
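A toy sketch makes that matching step concrete: the faceprint is just a vector of numbers, and identification amounts to finding the stored vector most similar to it. The names, vectors and threshold below are invented for illustration; real systems derive faceprints from a trained neural network rather than hand-written numbers.

```python
# Toy illustration of the matching step: a "faceprint" is a numeric vector,
# and identification means finding the closest stored vector.
# The vectors and threshold here are made up for illustration only.
import numpy as np

def cosine_similarity(a, b):
    """Similarity between two faceprints: 1.0 means identical direction."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical database of known faceprints (name -> vector).
database = {
    "person_a": np.array([0.12, 0.80, 0.31, 0.45]),
    "person_b": np.array([0.90, 0.05, 0.40, 0.10]),
}

# Faceprint extracted from the probe image (also made up).
probe = np.array([0.15, 0.78, 0.30, 0.50])

# Compare the probe against every stored faceprint and keep the best match.
best_name, best_score = max(
    ((name, cosine_similarity(probe, vec)) for name, vec in database.items()),
    key=lambda pair: pair[1],
)

THRESHOLD = 0.9  # below this, the system reports "no match"
if best_score >= THRESHOLD:
    print(f"Match: {best_name} (similarity {best_score:.3f})")
else:
    print("No match found")
```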

Research into bias and fairness in facial recognition technology rarely moves beyond standard White and Black distinctions, while the few studies that include an “Asian” category broad-brush the vast variations within the continent, Jain says. “Most Global North researchers have a view of diversity and fairness that is very restricted,” he added. “Indians don’t even end up as a category. If the researchers do include Indian faces, they don’t go beyond data sets of a few urban elite audiences from New Delhi or any other urban city.”

A similar conclusion was drawn by Northeastern University researchers in a paper released earlier this year. The paper showed that large computer vision datasets built for fairness research (similar to the one made by the Gender Shades researchers) involve categorising faces into races, usually based on US Census categories.

In describing how these racial categories are poorly defined, the researchers said, “The Indian/South Asian category presents an excellent example of the pitfalls of racial categories.” The Indian category is “obviously arbitrary – the borders of India represent the partitioning of a colonial empire on political grounds.”

The Northeastern University researchers, after expanding the racial category to “South Asians”, found that various tools trained on these fairness-oriented datasets showed the least consistency in their labels for the White and South Asian categories, possibly meaning these categories are the least definable.


Zaid Khan, one of the authors of that paper, says that the new Data Governance Network study fits into a larger research trajectory showing that “traditional racial categories don’t reflect diversity of humans very well, and fairness/bias based on Western racial categories don’t necessarily apply to the rest of the world.”

“My major takeaway is that commercial face recognition systems have very different accuracies for people from the Indian subcontinent,” he told Scroll.in. “One way to interpret this is that Indians aren’t well represented in the datasets of some companies. There may be no incentive for them to raise accuracy on faces from India.”

The problem is not limited to algorithms that deal with computer vision and facial recognition, according to a study by Google Research this year. “As AI becomes global, algorithmic fairness naturally follows. Context matters. We must take care to not copy-paste the western normative fairness everywhere... Without engagement with the conditions, values, politics, and histories of the non-West, AI fairness can be a tokenism, at best – pernicious, at worst – for communities.”

“AI is readily adopted in high-stakes domains, often too early (in India),” the study states. “Lack of an ecosystem of tools, policies, and stakeholders like journalists, researchers, and activists to interrogate high-stakes AI inhibits meaningful fairness in India.”

Slow Spread

In 2019, the Carnegie Endowment for International Peace found that 85% of the 64 countries it studied were using facial recognition for surveillance. India is no exception. The Internet Freedom Foundation, an Indian NGO that conducts advocacy on digital rights, has tracked 64 facial recognition systems either in use or in the making, with an estimated financial outlay of Rs 1,248 crore.

The popular cafe chain Chaayos came under fire in 2019 for using facial recognition software to bill customers. Credit: Medianama via YouTube.

A steady customer of the technology is the government. This year, the Union government announced a pilot to deploy facial recognition at vaccination centres in Jharkhand as one of the methods of authentication. When the Internet Freedom Foundation probed further by filing a query under the Right to Information Act, the government confirmed that these images would be sent to the Unique Identification Authority of India, the agency that manages the Aadhaar database.

In 2019, the National Crime Records Bureau issued a heavily-discussed tender, extended at least a dozen times, for a National Automated Facial Recognition System that would identify faces in images and videos based on existing criminal databases. This would require linking the system to other databases such as the Crime and Criminal Tracking Network & Systems, which was set up following the 2008 Mumbai terror attacks to integrate crime documentation across police stations and higher offices. Last year, the National Crime Records Bureau revised the tender once more to include identification of faces wearing masks.

“This could be the largest facial recognition system in the world,” said Parsheera. “It got the attention of a lot of researchers, including me. It was a big trigger.”

In a comprehensive research paper on the technology, Parsheera argues that the National Crime Records Bureau’s tender fails the Supreme Court’s Puttaswamy test of privacy (legality, rational connection and necessity). The Automated Facial Recognition System, she explains, has no statutory basis and operates without users’ consent or knowledge, and without these there are no procedural safeguards that could check its misuse.

Other experiments with facial recognition technology in India include the Ministry of Civil Aviation’s DigiYatra for airport entry (trialled at the Hyderabad airport), Telangana police’s own system, and an authentication system by the Department of Defence.

These adoptions come as the country has accelerated its use of CCTV cameras. Last year, the Supreme Court rebuked state governments for not installing CCTV cameras fast enough. Meanwhile, the Delhi government is in the process of doubling its cameras in the city to almost 6 lakh, with government schools being priority surveillance sites.

Researchers fear that the enthusiastic embrace of facial recognition technology in India has not been accompanied by meaningful debate or regulation. In ideal circumstances, a data protection law would have been a good start in circumscribing the technology. But the proposed data protection law in India includes wide exceptions for government activities.

Question Of Ethics

Against this background, the high failure rates of facial recognition technology assume a grimmer edge. The Delhi Police told the High Court in 2018 that its facial recognition software had an accuracy of only 2%. And the Delhi Police was not alone in this. According to a 2016 Georgetown report, the US Federal Bureau of Investigation found that only one out of every seven of its facial recognition queries was correct, potentially putting a large number of innocent people at risk.

India is enthusiastically embracing facial recognition technology as it accelerates the use of CCTV cameras. Credit: Anindito Mukherjee/Reuters

While delivering a talk at the Computer Vision conference, Jain warned that even small rates of failure could lead to heavy damage if the technology is used in a high-impact area. For instance, if it is used for vaccination verification, an error rate of 3% could leave millions of Indians excluded from the only real protection against a rampaging virus.
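The arithmetic behind that warning is simple. As a rough, hypothetical illustration (the population figure below is an assumption for scale, not a number from the study):

```python
# Back-of-the-envelope arithmetic behind Jain's warning: even a small
# failure rate, applied at national scale, excludes a very large number
# of people. The population figure is an assumption for illustration only.
adults_to_verify = 940_000_000   # rough order of magnitude for India's adult population
failure_rate = 0.03              # the ~3% detection failure rate cited in the audit

excluded = adults_to_verify * failure_rate
print(f"{excluded / 1_000_000:.0f} million people could fail verification")
# -> roughly 28 million people, if face recognition were the sole gatekeeper
```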

Long before the Gender Shades study, the US National Institute of Standards and Technology found in 2003 that algorithms had a harder time recognising female or young subjects. It was only after the Gender Shades research, though, that Google removed gender classification from its facial analysis tools and companies began to respond to concerns of bias.

Still, accuracy, researchers stressed, is a necessary but not sufficient condition for using the technology ethically. Parsheera wrote in 2019: “(Would) having facial recognition algorithms that are better at identifying individuals within more specifically defined classes necessarily lead to fairer outcomes? Or would it become an even more potent tool for targeted mis-treatment and discrimination?”

Joshi, who is currently conducting a global comparative review of technology regulation, says that proposed and ongoing use cases have significant implications for human and constitutional rights. “Most of these matters are thought of as public procurement, picking a particular technology problem to solve,” he said. “Procurement is not the best place to have policy decisions.”

Besides, Joshi says, those procuring the tools often know very little about how they function, swayed as they are by a surveillance sector that is a multi-billion-rupee industry in India alone.

“This is very different from existing forms of biometrics, like fingerprints,” Joshi said. Fingerprinting in India began in the 19th century as a colonial practice, but came with hefty regulatory standards. “We don’t have any of that in the case of facial recognition. It’s a void, of which the limits are yet to be tested.”

Adding to Parsheera and Jain’s study, Joshi says that biases in the Indian context may be exacerbated by the disproportionate share of tribal and lower caste citizens in Indian police systems. A study by technology researchers Vidushi Marda and Shivangi Narayan showed how these biases were cemented in Delhi’s predictive policing system, called Crime Mapping, Analytics, and Predictive Systems.

“Accuracy of facial recognition is not the end goal,” Marda told Scroll.in. “Face recognition is dangerous when it works and also when it doesn’t work because the institutional, societal and historical context of its application mean that its use is discriminatory, disproportionately impacting vulnerable communities and also exacerbating existing social inequalities.”

What could be the solution, then? Jain says we must ensure greater transparency around these models as a first step in tackling what is called the black box problem.

“The problem is we don’t have access to the model itself,” Jain said. “I don’t know if the bias is coming from the model or the data.” Perhaps the model is looking for the wrong features or the datasets used by companies are not representative of the population. “Do the companies not have Indian faces? Or are the faces fairly urban? If you don’t have good data, it’s garbage in, garbage out.”

Corrections and clarifications: An earlier version of this article misstated that Smriti Parsheera and Gaurav Jain’s paper had been accepted at the 2021 Computer Vision and Pattern Recognition conference; it was a part of the conference. The article also miscalculated the number of misidentified Indian women in the database. Both errors have been corrected.

Karishma Mehrotra is an independent journalist. She is a Kalpalata Fellow for Technology Writings for 2021.