On a hot May afternoon in India, when Hyderabad activist SQ Masood stepped out to get some paperwork done, he did not expect the police to stop and question him. Nor did he expect them to photograph him without consent.

At the time of the incident, Masood was passing through Shahran Market, an area with a large Muslim population in Hyderabad, the capital of the southern Indian state of Telangana. His father-in-law was with him.

Profiled picture

“There were eight to 10 police officers there randomly stopping people and questioning them,” Masood recalled. “Quite a few people saw them and turned back. I knew I was not doing anything wrong and was not afraid. So I did not turn back.”

A constable stopped Masood and asked him to take his mask off in order to photograph him. Masood was not violating lockdown restrictions: he was well within the 6 am to 3 pm window during which Hyderabad authorities had eased the lockdown. So he found this odd.

“I said I will not take off my mask until you tell me why,” Masood said. “There were two of them. They moved back a bit and photographed me with my bike and my mask on.”

“I think they were asking other people as well,” he said. “I was in a rush to get back so I am not sure. When I returned, I realised why I had probably been photographed.”

In 2020, the Indian government approved a plan to build the National Automated Facial Recognition System. Led by the National Crime Records Bureau, the system originally aimed to extract facial biometrics from videos and CCTV footage and match them against photographs in existing databases.

Representational image. Photo credit: Anindito Mukherjee/Reuters

Aiding rights violations

Across the world, facial recognition technology has played a role in wrongful arrests, intrusive surveillance and crackdowns on protests. It is now outlawed in 13 American cities, including San Francisco and Boston. Regulators in Europe are also rethinking the indiscriminate use of facial recognition systems in public spaces. India, however, is moving ahead.

Most privacy activists argue that the use of this technology violates human and digital rights.

Anushka Jain, associate counsel (surveillance and transparency) at Internet Freedom Foundation, an independent organisation that advocates for digital privacy and rights in India, told Unbias the News, “Research by people who have worked in these areas has shown that the use of facial recognition technology can never be consolidated with human rights.”

The Modi government appears to be leaning heavily on facial recognition for law enforcement and other purposes, in the absence of a data privacy law.

In December 2019, hundreds of thousands of Indian citizens came out to protest the Citizenship Amendment Act, a new citizenship law believed to be discriminatory towards Muslims and other marginalised communities. During the riots that followed (dubbed by critics a targeted pogrom against the Muslim minority), Delhi Police used a facial recognition system in more than 100 of the 1,818 arrests they made.

At another protest against agricultural reforms (accused of favouring corporations over farmers) at Delhi’s historic Red Fort, facial recognition technology was used along with CCTV footage to arrest more than 200 protesters.

Coerced inclusion

Other instances include the use of facial recognition to authenticate identity for public food grain distribution systems and other welfare programmes. Access to subsidised food rations, fertilisers, cooking gas, cash transfers and other social welfare benefits is also governed through a digital, biometric identification system known as UID (unique identification) or Aadhaar.

Internet Freedom Foundation’s Rohin Garg pointed out how this can lead to exclusion from state-funded welfare programmes: “If you are a migrant worker who works in construction, and you go to the ration shop, you have to press your thumb there to authenticate identity. Your finger might be so calloused that your fingerprint might have worn off from use, and you will be denied ration. So that is an issue.”

For Garg, “coerced inclusion” is also an issue raised by this tech. “What if I do not want to provide my face to the government? A lot of airports are using facial recognition technology to verify passengers instead of the usual flight tickets they used to hand out. What if you want to opt-out of something like that? There is no alternative available, and I would call this coerced inclusion,” he concluded.

Historically racist

There is strong evidence of various facial recognition systems displaying racial and gender bias, leading to false matches and exclusion.

As Masood put it, “Usually, these practices [of stopping or questioning citizens] have been happening in Old Hyderabad, where you will find more poor citizens, Muslims and Dalits.”

“This sort of thing does not happen in the new city [of Hyderabad],” he added. “They would not dare ask the people there to take off their masks.”

As MIT’s Gender Shades project showed, facial recognition systems developed by IBM, Microsoft and Face++ displayed relatively high accuracy overall but faltered in recognising certain genders and skin tones. The systems, with overall accuracy of 87.9% to 93.7%, classified male faces 8.1 to 20.6 percentage points more accurately than female faces, and lighter-skinned faces 11.8 to 19.2 percentage points more accurately than darker-skinned faces. IBM has since worked on improving its system after researcher and digital activist Joy Buolamwini pointed out the bias.
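The gaps reported above come from computing accuracy separately for each demographic subgroup and comparing the results. A minimal sketch of that arithmetic, using made-up records rather than the Gender Shades data, might look like this:

```python
from collections import defaultdict

# Hypothetical audit records: (demographic subgroup, was the face
# classified correctly?). These are illustrative, not real results.
records = [
    ("lighter_male", True), ("lighter_male", True),
    ("lighter_female", True), ("lighter_female", False),
    ("darker_male", True), ("darker_male", False),
    ("darker_female", True), ("darker_female", False),
]

totals = defaultdict(int)   # faces seen per subgroup
correct = defaultdict(int)  # correct classifications per subgroup

for group, ok in records:
    totals[group] += 1
    if ok:
        correct[group] += 1

# Per-subgroup accuracy, and the worst-case gap between subgroups.
accuracy = {g: correct[g] / totals[g] for g in totals}
gap = max(accuracy.values()) - min(accuracy.values())

print(accuracy)
print(f"accuracy gap: {gap:.1%}")
```

A system with 90% overall accuracy can still hide a large gap like this one, which is why audits such as Gender Shades report per-subgroup numbers rather than a single figure.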

A facial recognition counter at the Rajiv Gandhi International Airport in Hyderabad. Photo credit: Noah Seelaam / AFP

Basis of bias

“It is important to look at how this bias develops,” said the Internet Freedom Foundation’s Anushka Jain. “This bias develops because the data sets based on which these algorithms are developed are themselves biased. So if you get a technology that has been developed on data sets that are predominantly white, in that situation also they will not be able to identify people of colour in India.”

“And even if these systems are being developed in India, not everyone in India has a similar skin tone,” she added. “There are differences in colour, which have to be taken into account. Just because we do not have skin tones as disparate as in the United States does not mean that facial recognition technology is automatically more accurate in India.”

‘Possible, not positive’

Not everyone agrees, however, that facial recognition systems are biased. Michael Furia, a law enforcement detective and analyst certified in Adobe Photoshop, told Unbias The News, “To say facial recognition technology is biased is ridiculous to me.”

“I would never claim to have a positive match on someone if I was not positive and even then, we say it is possible, not positive,” he said. “The idea that Asians or women are more likely to be wrong is not accurate. It is just more difficult to distinguish facial identifiers between Asians and women because of make-up and beauty alterations in mugshots such as eyebrows tattooed on, unusual haircuts and so on.”

“Let us face it: you do not see blonde Asians with freckles and sunburn too often,” he said. “You do not see the majority of women sporting beards and trimmed facial hair as often as men.”

Law enforcement worldwide has largely resisted the idea of regulating or banning facial recognition technology.

Targeting the vulnerable

Global investigations, reports and documentaries on technology indicate minority groups are the worst affected by invasive tech in the hands of majoritarian governments.

Talking about the time when police photographed him on the street, Masood said, “My apprehension is that if they did not use my photograph to generate a challan [a fine notice issued by the police for traffic violations], why did they take a photograph at all? Have they deleted it or stored it in some database? If they have stored it, what will they do with it? Which other databases will they link it to, and who will they share it with? For what purpose will they use it?”

Building a database is critical to the National Crime Records Bureau’s plan to deploy the Automated Facial Recognition System across the country.

In a 172-page Request for Proposal released by the NCRB in 2019 (and since removed but obtainable in cache), the originally desired specifications for such a system are laid out in detail: “The system shall be able to broadly match a suspect/criminal photograph with database created using photograph images available with Passport, Crime and Criminal Tracking Network and Systems, Interoperable Criminal Justice System and Prisons, Ministry of women and child development (KhoyaPaya) State or National Automated Fingerprint Identification System or any other image database available with police/other entity. Match suspected criminal face from pre-recorded video feeds obtained from CCTVs deployed in various critical identified locations, or with the video feeds received from private or other public organisation’s video feeds.”

An excerpt from the original 2019 ‘Request For Proposal To procure National Automated Facial Recognition System’ from NCRB.

In response to legal demands by Internet Freedom Foundation, the original request for proposal was removed and replaced with one that excludes references to using facial images from CCTV video feeds. However, the vagueness of the new language still leaves the door open for video footage to be used, according to Internet Freedom Foundation.

The potentially broad scope of the programme is chilling, particularly in a country where law enforcement has openly committed atrocities against Muslims and has repeatedly displayed a communal and casteist attitude.

Not surprisingly, Masood expressed concern over where his photograph could end up: “Telangana police have developed an app called TSCOP that every constable has, and they have linked different databases to it.”

“Databases of prisoners, history sheeters and others I do not know of,” Masood said. “If my photograph ends up in that database, what category will it fall under? I do not know that and it worries me, especially since I am Muslim and being a Muslim in India . . . well, you know how it is.”

Surveillance weapon?

Civil and digital rights activists globally have been calling for a complete ban on facial recognition systems. Anushka Jain of India’s Internet Freedom Foundation told Unbias The News that although the organisation is calling for a complete ban, it understands why that might not be feasible in India and is willing to work with the government on building safeguards and regulating the use of this technology.

But not everyone wants a ban.

“I feel banning facial recognition is similar to telling a detective not to use a computer to conduct an investigation,” Michael Furia told Unbias the News. “Like a computer, facial recognition is not a must-have tool but it saves days, weeks, or months of investigating.”

“Facial recognition is never telling or giving you permission to lock someone up,” he added. “It is merely a lead telling you that the person of interest looks almost identical (in most cases) to the suspect of the crime and you may want to speak to that person or keep an eye on him until you develop probable cause. I cannot see how that is a violation of anyone’s rights and am very angered to think the public would not want to have such a valuable tool at law enforcement’s hands when there are so many shootings and even terroristic acts in the modern world.”

Even though law enforcement and governments continue to be at loggerheads with privacy and digital rights advocates over the use of facial recognition systems, repeated human rights violations across the world show that this technology comes at a steep cost.

The question is: can we afford it?

This article first appeared on Unbias the News.