In 2012, in Santa Cruz in the United States, a company called PredPol Inc developed software that promised to predict future criminal activity by analysing past crime records and identifying patterns. This simple idea of “predictively policing” an unsuspecting population aimed to change the face of law and order in the US.

Police departments in major US cities began to use such predictive technology in their efforts to curb crime.

Facial recognition in India

In India too, such artificial intelligence tools are increasingly being put to use. For instance, during his annual press briefing in February, the Delhi police commissioner said that 231 of the 1,818 people arrested for their alleged role in the 2020 Delhi riots had been identified using technological tools.

Of them, 137 suspects had been arrested with the help of facial recognition technology, which scans the faces of people in crowds and maps them against existing databases, The Indian Express reported.

The Internet Freedom Foundation notes that Indian authorities have installed at least 48 facial recognition systems.

The Delhi police’s use of technology is not limited to facial recognition. It has also been using predictive policing tools such as the Crime Mapping, Analytics and Predictive System, which analyses data from past and current phone calls to police hotlines to predict the time and nature of criminal activity at hotspots across the city.
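The inner workings of the Crime Mapping, Analytics and Predictive System are not public, but the general approach behind such hotspot tools can be shown with a minimal sketch. The call records, grid size and field layout below are assumptions made for illustration, not the actual system:

```python
from collections import Counter
from datetime import datetime

# Hypothetical call records: (timestamp, latitude, longitude).
# Real hotline data would carry far more fields, and far more noise.
calls = [
    ("2020-02-23 21:15", 28.6692, 77.2793),
    ("2020-02-23 22:40", 28.6701, 77.2810),
    ("2020-02-24 09:05", 28.5245, 77.1855),
]

def grid_cell(lat: float, lon: float, size: float = 0.01) -> tuple:
    """Bucket coordinates into a coarse grid cell (roughly 1 km here)."""
    return (round(lat / size), round(lon / size))

# Count calls per (grid cell, hour of day) bucket.
counts = Counter()
for timestamp, lat, lon in calls:
    hour = datetime.strptime(timestamp, "%Y-%m-%d %H:%M").hour
    counts[(grid_cell(lat, lon), hour)] += 1

# The busiest buckets become the predicted "hotspots". Whatever model
# sits on top, the output inherits whatever bias the records carry.
for bucket, count in counts.most_common(2):
    print(bucket, count)
```

The last comment is the point of the sketch: however sophisticated the statistical machinery on top, the hotspots such a system produces are only as sound as the call records fed into it.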

This is not without huge risk. Across the world, the effectiveness of artificial intelligence tools is being called into question. These technologies have been known to amplify bias, producing inaccurate judgements about minorities and furthering the exclusion of marginalised communities.

This is because artificial intelligence systems are built on pre-existing data, which is often biased against certain groups, whether through prejudice among the recording authorities or plain inaccuracy.

For instance, a study published last year by researchers Vidushi Marda and Shivangi Narayan on Delhi’s Crime Mapping, Analytics and Predictive System identified several sources of bias: inaccurate data, particularly on the severity of the crime reported; skewed representation of certain religious groups; and prejudiced recording, especially of crimes occurring in poorer communities.

It is apparent that the problem is not just the technology: it is society. The prevalence of deep-rooted bias makes it hard to keep the societal prejudices of these programmes’ designers out of the data and the technologies they develop.

The historical bias in the data has deeper roots, going back to the British-era practice of categorising entire communities as criminal, condemning their members from birth. This was done through laws such as the Criminal Tribes Act of 1871, the Criminal Tribes Act of 1924 and, after Independence, the Habitual Offenders Act of 1952.

This primitive form of predictive policing deprived the groups listed under these Acts of any dignity. They were denied opportunities and marginalised by society and the law for crimes they had not even committed. Remnants of this practice persist in today’s policing, not in the legal framework but in a social and mental one.

The Status of Policing in India 2019 report, based on a survey of 11,834 personnel, found the force riddled with biases. Over half of the police officials interviewed said they believed Muslims are more likely to commit crimes. The report found similar biases against Dalits, Adivasis and certain caste communities.

Data bias can be mitigated through audits and analyses that identify patterns in datasets and flag the over- or under-representation of communities, and through a more collaborative approach to developing such systems, as the sketch below illustrates. It is organisational and institutional bias that is more dangerous and harder to get rid of.
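As a rough illustration of what such an audit could look like, one can compare each community’s share of entries in a crime register against its share of the general population. This is a minimal sketch: the group labels, register counts and population shares are invented for illustration, not drawn from any real police dataset:

```python
# Minimal sketch of a representation audit on a crime register.
# The group labels, register counts and population shares below are
# invented for illustration, not real data.
register_counts = {"group_a": 620, "group_b": 240, "group_c": 140}
population_share = {"group_a": 0.45, "group_b": 0.35, "group_c": 0.20}

total = sum(register_counts.values())
for group, count in register_counts.items():
    register_share = count / total
    # A ratio well above 1 flags possible over-representation in the
    # register; well below 1, possible under-representation.
    ratio = register_share / population_share[group]
    print(f"{group}: register share {register_share:.2f}, "
          f"population share {population_share[group]:.2f}, ratio {ratio:.2f}")
```

Flagging a skewed ratio is the easy part; deciding why the skew exists, and whether the underlying records can be trusted at all, is where the organisational bias described above comes in.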

Biased tech

After the London riots of 2011, London’s Metropolitan Police implemented a predictive policing tool, the Gangs Matrix, to track suspected gang members it believed could perpetrate extreme violence. The system was almost universally criticised.

Recent reports by Amnesty International and the UK’s data protection regulator, the Information Commissioner’s Office, highlighted serious problems, such as the fact that more than 70% of the people flagged by the Gangs Matrix were black.

Similarly, an external analysis of the New York Police Department’s predictive policing tool Patternizr found that despite a concerted effort to remove race and gender markers from the historical data used to train it, the risk of racial profiling and racially biased identification loomed large.

But the Indian police establishment treats this complexity in policing systems as a feature rather than a bug, keeping these systems closed to scrutiny, internal or external. This makes it harder to evaluate and mitigate the biases and undesired impacts of these systems on communities that tend to be overrepresented in police registers. It is as if the establishment wants to use the tools’ predictions to justify its own prejudices.

In June 2020, Santa Cruz banned PredPol.

While it seems difficult to stop the deployment of such systems, there is still time to rethink their design and the organisational structures in which they are implemented. The Indian police’s poor data systems, its historical lack of transparency and its poor record in the treatment of certain communities together demand greater scrutiny.

Besides, it is important to establish a regulatory body and accountability measures to examine the implementation of these tools. While the case for not using predictive policing at all stands strong, if this technology is used, it should be guided by a strong principle: protecting all of India’s citizens.

Gaurav Jain is a Fellow and Raghav Chopra is the Program Manager at the Young Leaders in Tech Policy Fellowship, International Innovation Corps.