Why is it that at a time when human existence is threatened by climate change, the future of work is threatened by automation and the future of every other living being on the planet is threatened by humans, we are increasingly electing regimes guaranteed to destroy life as we know it?

There’s a frightening answer. Technology, which was supposed to make our lives better, is actually threatening the future of democracy. Without careful design and collective struggle, our default trajectory might be to heighten authoritarianism, clamp down on dissent, reinforce existing borders and erect new ones.

Technology, it’s clear, plays a big role in strengthening the forces of authoritarianism. Eventually, it will play a big role in helping us imagine a better future. But the current moment belongs to authoritarian forces fighting for more than their fair share of a shrinking pie. One way they’re able to take more than their due is by drawing our attention away from where it needs to be, shifting our gaze towards powerless victims instead of towards the problems these powerful forces have created.

Nevertheless, the authoritarians get it right in one respect: they articulate a world in crisis better than anyone else. The vision of fear they have created is more believable than the liberal intelligentsia’s vague pronouncements of universal humanity. When that fear congeals around immigrants and traitors rather than around corporations and the 1% who own the majority of the world’s resources, a falsehood is perpetrated. Whatever the right wing’s challenges with facts and reason, it understands emotion better than progressives do.

Money and power make it easier for them to amplify this emotional register. Recent events in India are a case in point.

Algorithmic politics in Kashmir

On August 5, the Narendra Modi regime changed the equation between the state of Jammu and Kashmir and the central government. The press has reported how the announcement was preceded by the Amarnath Yatra being cancelled, thousands of soldiers being flown into the Valley and the entire Kashmiri political class being placed under house arrest. The internet was completely shut down in Kashmir and remains so. Residents cannot WhatsApp their friends, send them videos on TikTok or Snapchat, or use messaging services to organise protests or rallies.

India leads the world in temporary internet shutdowns. From local bureaucrats to Home Minister Amit Shah, government officials cite public security as a reason to suspend a channel that has become the routine mode of communication for most Indians. Since the medium is the message, the politics of free speech is the politics of the internet. The shutdown of WhatsApp, however temporary, is how the government controls people’s minds.

Moreover, the shutdown is temporary by design.


Attention being the scarcest resource today, the way to control minds is to control attention, whether by making people focus where businesses and governments want them to (white holes) or by creating black holes of information where they would rather people didn’t look. There’s no advantage in making a black hole permanent, because attention is fickle, shifting constantly from one spectacle to the next. Smart governments and businesses are constantly creating and destroying white holes and black holes. From managing expectations about jobs to creating new dystopic images of anti-nationals, every modern state is in the business of constantly focusing and refocusing its citizens’ attention.

We must treat with scepticism the conspiracy-based explanations that there’s a hyper-intelligent cabal of scheming businessmen and politicians directing minds as they see fit. Instead, it is more plausible that the rapid shifts of collective attention are systemic properties that cannot be ascribed to individual manipulators.

Instead, we need to examine more closely “algorithmic politics”: the combination of streaming data, political calculations and emerging opportunities. This form of politics is gathering strength, as is evident from the Bharatiya Janata Party’s victory in 2014 and its consolidation of power in 2019. In both, the party was helped by algorithmic media – media forms that sample data streams and sense opportunities, whose products are then circulated on social media, especially WhatsApp. That circulating disinformation in turn creates new opportunities that are sensed and turned into viral stories. Lynchings are a paradigmatic, if unfortunate, example.

None of this can be planned or controlled beyond a point. The winners at algorithmic politics are those who understand the inherently complex nature of the underlying system.

In hindsight, it’s clear that print and broadcast media created new forms of democratic politics even as they fostered new forms of authoritarianism. Why would it be any different with algorithmic media?

Is resistance futile?

Yes, unless the tree-huggers figure out how to capture and manage attention as well as the tree-cutters do. To do that, they have to grasp how the attention economy differs from ideology and propaganda.

Attention management

It’s clear that the algorithmic management of attention is substantially different from what we used to call propaganda, just as paying Google to rank highly on certain keywords is substantially different from launching a traditional print ad campaign. Of course, both are forms of advertising. But there’s a world of difference in how online ads are placed in front of a customer, and in what the customer does with an ad once they are drawn to its message.

Similarly, political advertising today is much more targeted. Propaganda identifies a uniform, faceless threat: the Jewish person, the communist, the Muslim. In contrast, the ideal algorithmic violence is personalised, localised and context-dependent.

It is about identifying a specific yet random individual who carries an undesirable identity. It’s specific in that the violence is always directed at a particular black or Muslim or LGBT person who happens to be in your vicinity, someone you can see with your own eyes. It is random in that the perpetrators of violence couldn’t care less about that person’s individuality as long as they belong to a certain target identity. Specific yet random is the logic of personalised attention in the age of machine learning.


The widespread availability of this specific randomness is impacting politics as much as business. One reason the world is seeing new forms of political violence, I contend, is algorithmic media. In India, we see it in the emergence of lynching as the chief instrument of street violence. In the US, it is evident in the increasing number of mass shootings. In both cases, it’s as if a machine learning algorithm had infected the mind of a lynch mob or a gun-toting avenger and turned it to violence.

In propaganda, there’s a strong connection between the official party line and the violence on the street. Intellectuals were murdered during China’s Cultural Revolution because Mao said so. In contrast, there’s a tenuous link – if any – between the pronouncements of US President Donald Trump and the shooter in the street.

The recent upsurge in Artificial Intelligence has been driven by deep learning algorithms that work well but are opaque to our understanding. That makes them simultaneously smart and shallow. Small changes to images (called adversarial images) can fool these algorithms in ways that would never fool a human. The computer scientist and Artificial Intelligence pioneer Judea Pearl says that deep learning fails because it lacks a grasp of causality – that is, the instinctive knowledge you have of, say, being the person who kicked the football, or of knowing that a player on the other team prevented you from doing so.
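To see just how shallow that pattern-matching is, consider a minimal sketch of the adversarial idea, using a toy linear classifier in place of a real deep network. Everything in it – the dimensions, the margin, the step size – is an illustrative assumption, not a reproduction of any published attack.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 10_000  # the "pixels" of a flattened toy image

# A fixed, already-"trained" linear classifier: label 1 if w.x > 0, else 0.
w = rng.normal(size=d)

def predict(x):
    return int(w @ x > 0)

# A synthetic input that the model labels 1 with a comfortable margin.
x = rng.normal(size=d)
x += (5.0 - w @ x) * w / (w @ w)  # nudge x so that w.x == 5 exactly

# Fast-gradient-sign-style perturbation: move every pixel a tiny step
# (0.1% of its typical magnitude) against the gradient of the score.
# No single pixel changes perceptibly, but 10,000 tiny steps add up
# along the decision boundary and the label flips.
epsilon = 0.001
x_adv = x - epsilon * np.sign(w)

print(predict(x), predict(x_adv))  # prints: 1 0
```

The point of the sketch is that many individually imperceptible changes, all aligned against the model’s decision rule, accumulate; adversarial attacks on real deep networks exploit the same high-dimensional arithmetic.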

Algorithmic politics throws up the same problem of causality, but at a much bigger scale. For example: do we really know what causes lynchings? It’s easy to blame the current dispensation by saying it has created an atmosphere of hatred. It’s equally plausible to absolve it of any blame by saying that it is almost impossible for a politician to control what’s done on the ground in some village in Rajasthan.

What’s missing is an understanding of the complex chains that connect ideology with violence.


Google, the all-powerful search engine that fields our queries about life, the universe and everything else, doesn’t care whether it understands the causality behind its models so long as they predict well. Consider the now infamous case in which Google’s algorithms labeled black people as gorillas – a clear example of prediction based on biased data rather than any causal understanding of species differences. The solution was to simply stop labeling gorilla photos, rather than to tackle the much harder problem of working causal structure into the algorithms – it’s acceptable if the algorithms make obvious mistakes or contribute to racial profiling, as long as business continues as usual. In contrast, progressive politics of any kind will have to care about real people, and therefore questions of causality are crucial.

The future of politics isn’t between left and right, but between predictors and explainers. Predictors use data to drive people’s emotions in the direction they want, with no care for who is hurt and how. Their target is the specific yet random person. Predictive politics is the political equivalent of Google’s AdWords, which auctions keywords to potential advertisers so that when you search for fever symptoms, you are shown ads for nearby doctors.
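To make the analogy concrete, here is a toy sketch of a single-slot keyword auction in the spirit of a generalised second-price auction – an illustration of the mechanism, not Google’s actual, far more elaborate ranking and pricing logic. The advertiser names and bids are made up.

```python
def run_keyword_auction(bids):
    """bids maps advertiser -> bid (per click) for one keyword.
    The highest bidder wins and pays just above the runner-up's bid."""
    ranked = sorted(bids.items(), key=lambda item: item[1], reverse=True)
    winner = ranked[0][0]
    price = round(ranked[1][1] + 0.01, 2) if len(ranked) > 1 else ranked[0][1]
    return winner, price

# Three hypothetical clinics bidding on the keyword "fever symptoms":
bids = {"clinic_a": 1.20, "clinic_b": 0.90, "clinic_c": 0.40}
print(run_keyword_auction(bids))  # prints: ('clinic_a', 0.91)
```

The design worth noticing is that whoever values the keyword – that is, your attention at the moment you search – most highly wins it, at a price set by the competition. Predictive politics runs the same logic on emotions instead of ad slots.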

By contrast, explainers care about the actual people behind their statistical signatures. Progressive politics should privilege explanations over predictions. Explanation is harder in every sense of the term.

Rajesh Kasturirangan is a mathematician and cognitive scientist.