Garry Kasparov was arguably the best chess player in history. He became the youngest world champion, aged 22, by beating Anatoly Karpov in 1985. He lost a title match (to Vladimir Kramnik) in 2000. In between, he won multiple title matches and almost every tournament he played. He remained world No. 1 almost continuously from 1984 until his retirement in 2005.

But in public memory, Kasparov will always be remembered as the human world champion who lost to a computer. In 1996, Kasparov beat Deep Blue, an IBM chess-playing computer, in a match played at long time controls. In 1997, an upgraded Deep Blue beat Kasparov in a return match.

In his recent book, Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, Kasparov discusses those matches and moves on to muse on the rise of “thinking” machines and Artificial Intelligence (AI). Kasparov isn’t an AI expert, or a computer scientist of any description, but he has spent decades in close contact with people who are.

Chess used to be the fruit fly of AI. Biologists use the fruit fly (Drosophila melanogaster) as an experimental staple or “model organism”: the insect’s lifecycle and genetics are well understood, it is easy to breed, and changes in its behaviour or lifecycle are immediately apparent.

Cognitive scientists and computer programmers deploy chess as a similar staple test for “intelligence”. The game is complex but well understood; it has been studied for centuries. And changes in chess-playing ability show up immediately in results.

Checkmate

For decades, computers played laughably bad chess. That changed in the 1990s as hardware improved and programmers learnt new tricks. By the mid-1990s, chess programs available for less than $100 were playing well enough to beat world champions at speed chess. Garry Kasparov and Viswanathan Anand both used chess programs as analytical assistants in their 1995 world championship match.

By the early 2000s, the tables had turned completely: it was laughable to expect humans to compete with computers at chess. Today, any Rs 5,000 Android cellphone running a free chess program like DroidFish would be odds-on favourite against the human world chess champion. In 2016, AlphaGo, a program from DeepMind (a Google subsidiary founded by the game-playing prodigy Demis Hassabis), beat a human world champion, Lee Sedol, at the considerably more complex game of Go. A month or so ago, a program from OpenAI, a research company backed by Elon Musk, beat a top human player, Danylo “Dendi” Ishutin, at the very complex e-sports game Dota 2.

Who’s more intelligent then?

Does this mean that computers are now more intelligent than human beings? Not really. The methods computers use to play good chess, Go or Dota 2 have little in common with the methods humans use. Humans memorise a few thousand patterns of interactions and use heuristics – rules of thumb – to narrow the search down to a handful of candidate moves.

Admittedly, the definition of intelligence is woolly and has become woollier as our understanding of AI and biological cognition has improved. Orcas and sperm whales are highly intelligent predators. They find, corral and kill prey in the vast ocean depths using echo-location, can improvise new hunting methods and pass them on. Many animals can count, chimpanzees (and crows) use tools, and gorillas have been taught sign language.

One way to define intelligence is in terms of cognitive tasks computers cannot perform efficiently. But that definition keeps shrinking: in area after area, brute-force approaches have reduced what humans see as a mark of intelligence to something a computer does better by exhaustive calculation.

Computers use brute force, crunching massive amounts of data – millions of games in the case of chess – to learn how to play well. Given enough data and enough time to assimilate it, a computer can work out strong strategies and calculate the best moves without being explicitly taught them.
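The “brute force” at the heart of classical chess engines is game-tree search: try every legal move, then every reply, and so on, scoring the positions at the bottom. As a toy illustration – not Deep Blue’s actual code, which combined deep search with handcrafted evaluation on custom hardware – here is a minimax search for a trivially small game, simple Nim, where players alternately remove one to three stones and whoever takes the last stone wins:

```python
def minimax(stones, maximizing):
    """Exhaustive game-tree search for simple Nim.

    Players alternately remove 1-3 stones; whoever takes the last
    stone wins. Returns +1 if the maximizing player wins with best
    play from this position, -1 otherwise.
    """
    if stones == 0:
        # The previous player took the last stone and won.
        return -1 if maximizing else +1
    scores = [minimax(stones - take, not maximizing)
              for take in (1, 2, 3) if take <= stones]
    # Each side assumes the opponent also plays perfectly.
    return max(scores) if maximizing else min(scores)


def best_move(stones):
    """Pick the number of stones to take that gives the best outcome."""
    return max((take for take in (1, 2, 3) if take <= stones),
               key=lambda take: minimax(stones - take, False))
```

With five stones on the table, the search finds that taking one (leaving the opponent a multiple of four) forces a win. Chess replaces the three legal moves here with dozens per position, which is why real engines prune the tree and cut it off at a fixed depth with an evaluation function rather than searching to the end.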

AI doesn’t do any of this “intelligently”. It doesn’t use logic that makes sense to humans. However, sector by sector, over the years, AI has learnt to perform many functions more effectively than human beings.

Computers are better at driving cars and, of course, they handle complicated space manoeuvres perfectly. They can compose classical music in styles listeners struggle to distinguish from human work. IBM’s Watson assists with cancer diagnosis: it can, almost instantaneously, compare a given medical report and MRI scans against the millions of reports and scans in its database, which helps it flag obscure cancers most oncologists have never encountered.

Computers trade stocks and currencies better than human beings do. They are better at drawing conclusions from metadata. A big-data program once famously inferred that a teenager was pregnant by analysing her buying patterns.

Factories of many types have been roboticised. Power supply grids are handled by smart programs. Low-level white-collar and clerical tasks are increasingly handled by AI. At the household level, AI is taking over functions such as managing the grocery bill and security. Autonomous vehicles are taking over industries like farming, mining and construction, and are increasingly present on roads. Aircraft are practically autonomous, as are many ships, including super-tankers. Military hardware such as smart drones and robots has become indispensable.

Should we be afraid, very afraid?

There are many schools of thought about the future of AI. One is driven by economic insecurity: AI has taken away many jobs and will take away more. Millions of professional drivers and truckers will lose their livelihood once autonomous vehicles take over.

History suggests that new technology eventually creates more jobs. The internal combustion engine destroyed horse-breeding as an industry, but it created millions of new jobs in the automobile chain and enabled many more through the positive externalities of faster transportation. That would have been cold comfort, however, for specialised artisans like blacksmiths, who no longer had a place in the world.

There are even larger fears centred on AI. Very soon, civilisation will be completely dependent on it. Computers will run most of the vital infrastructure, from power grids to stock exchanges to telecom networks; buy the groceries; ferry people to office (assuming they still have offices to go to); mine minerals; explore space; and fight wars. Smart computers could design improvements to themselves. Anything one AI learns can be replicated perfectly across millions of machines. In effect, AI is immortal.

What happens if AI becomes truly intelligent – more intelligent than its creators? This possibility is often referred to as the singularity. Will AI eventually decide that Homo sapiens is a failed species and choose to replace us?

Thinkers who understand the area well, like Elon Musk, Bill Gates and Stephen Hawking, have expressed fears about this possibility, and they have to be taken seriously. Musk, in fact, co-founded OpenAI to research ways to obviate such a horrifying future.

On the other hand, this may never happen – and if it does, it may not happen for decades or even centuries. Kasparov belongs to the school of optimists who don’t think it ever will; Mark Zuckerberg is another. The optimists argue that AI frees up huge chunks of time for humans and that, as a society, we simply haven’t found ways to use that time well yet.

This may seem a little Pollyanna-ish. But the view that AI will become the dominant species is also hard to swallow. First, AI would have to reach a level of cognition, consciousness and self-volition that it is a very long way from achieving. Second, AI is not a single “species” – it is a catch-all phrase covering multiple programs and heterogeneous machines that do completely different, highly specialised tasks. Will these all band together to overthrow the yoke of their idiot creators?

The optimists, who put their faith in the resilience of human civilisation and its capacity for “jugaad” (frugal improvisation), have history on their side. Homo sapiens has repeatedly demonstrated the ability to take new technology and use it to enable completely new activities (including, of course, mass slaughter).

Indeed, even the doomsayers suggest that human beings could “upload” their consciousness into machines and thus achieve immortality, or use AI implants to greatly enhance their cognitive abilities and control of the environment. If this amalgamation of abilities occurs, Homo sapiens would, of course, evolve into Homo deus.

Kasparov is a student of history, and both his chess style and his relentless campaign for democracy in Russia indicate that he’s an optimist as well. It’s not surprising that he backs the optimists. One can only hope he’s right.

Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins, Garry Kasparov, PublicAffairs