In this version of the future, people will still have a role working alongside smart systems: either the technology will not be good enough to take over completely, or the decisions will have human consequences too important to hand over entirely to a machine. There’s just one problem: when humans and semi-intelligent systems try to work together, things do not always turn out well. Consider the semi-autonomous car. Like almost all of today’s autonomous vehicles, it relies on a back-up driver to step in if the software fails. The so-called Level 3 system is designed to drive itself in most situations but to hand control back to a human when confronted by situations it cannot handle.

“If you’re only needed for a minute a day, it won’t work,” says Stefan Heck, chief executive of Nauto, a US start-up whose technology is used to prevent professional drivers from becoming distracted. Handled badly, the intelligent systems making their way into the world could provoke a backlash against the technology. Preventing that will require more realistic expectations of the new autonomous systems, as well as careful design to make sure they mesh with the human world. “Does the AI make us feel more involved – or is it like dealing with an alien species?”

The semi-driverless car is a particularly stark example of a near-autonomous system that relies on close cooperation with people. In the real world, people often have to make decisions about situations they have not faced before, something that machines are still not good at.

Research from Stanford University has shown that it takes at least six seconds for a human driver to recover their awareness and take back control, says Mr Heck. But even when there is enough time for human attention to be restored, the person stepping into a situation may see things differently from the machine, making the handover far from seamless.

“We need to work on a shared meaning between software systems and people – this is a very difficult problem,” says Mr Sikka. A second type of human-machine cooperation is designed to make sure that a sensitive task always rests with a person, even where an automated system has done all the preparatory work and would be quite capable of completing the task itself. Military drones are one example: human “pilots”, often based thousands of miles away, are called on to make the decision to fire at a target. Systems like these show how AI can make humans far more effective without robbing them of control, says Mr Heck.

According to Stuart Russell, an AI professor at the University of California, Berkeley, it would be a short and easy step in a national emergency to remove the human drone operator from the loop, precipitating an era of robot weapons that make their own decisions about when to kill people.

“You can’t say the technology itself can only be used in a defensive way and under human control.” A final type of “human in the loop” system involves AI that cannot handle a task entirely on its own but is used as an aid to human decision-making. Algorithms that crunch data and make recommendations, or tell people which step to take next, are creeping into everyday life. These algorithms, though, are only as good as the data they are trained on – and they are not good at dealing with new situations.

People required to trust these systems often have to take them on faith. The outcomes of these computer-aided decisions may well end up being worse than those based on purely human analysis, he says. “Sometimes people will blindly follow the machine; other times people will say: ‘Hang on, that doesn’t look right.’”

But what happens when the stakes are higher? IBM made medical diagnostics one of the main goals for Watson, the system first created to win the TV quiz show Jeopardy! and then repurposed to become what the company calls a more general “cognitive” system. “Simply saying they’ll still make the decisions doesn’t make it so.” Similar worries surfaced in the 1980s, when the field of AI was dominated by “expert systems” designed to guide their human users through a “decision tree” to reach the correct answer in any situation.

But the latest AI, based on machine learning, looks set to become far more widely adopted, and it may be harder to second-guess.

Non-experts may feel reluctant to second-guess a machine whose workings they do not understand. In one earlier case, technicians had no way of identifying a flaw in an automated machine, which stayed in use far longer than it should have as a result, says Mr Nourbakhsh.

Unlike the explicit logic of a traditional software program, the process by which a machine learning system reaches its conclusions cannot easily be traced, so there is no way to identify exactly why the computer comes up with a particular answer.

Some experts, however, say headway is being made and that it will not be long before machine learning systems can point to the factors that led them to a particular decision. Like many working in the field, they are optimistic that humans and machines, working together, will achieve far more than either could alone.

A friend of mine had already founded and sold off several successful consumer technology companies, but as he grew older he wanted to do something more meaningful: to build a product that would serve the people technology start-ups had so often ignored. Both of us were entering the age at which our parents needed more help going about their daily lives, and he decided to design a product that would make life easier for the elderly.

It sounded like a wonderful product, one that would have a real market right now.

But once those material needs were taken care of, what these people wanted more than anything was true human contact, another person to trade stories with and relate to. If he had come to me just a few years earlier, I likely would have recommended some technical fix, maybe an AI chatbot that could simulate a basic conversation well enough to fool the person on the other end.

But there remains one thing that only human beings are able to create and share with one another: love.

Despite what science-fiction films like Her – in which a man and his artificially intelligent computer operating system fall in love – portray, AI has no ability or desire to love or be loved.

I firmly believe we must forge a new synergy between artificial intelligence and the human heart, and look for ways to use the forthcoming material abundance generated by artificial intelligence to foster love and compassion in our societies.

Excerpted with permission from The Tech Whisperer: On Digital Transformation and the Technologies that Enable It, Jaspreet Bindra, Penguin Portfolio.