It was a case of cold comfort when I came across a reel of an influencer applauding delivery partners in India. In Canada, she has to go to the post office herself to return a package; in India, a delivery partner picks it up from her doorstep at no cost, saving her time and money. “I love my country,” she gushed. For this influencer and the many people commenting on her reel, this is a marker of progress and development, a sign of how far India has come. “India, India, Indiaaaa,” one user commented, a rallying cry for a system that dehumanises the individuals who make it function: its workers.

The illusion of convenience

Delivery partners are not employees. They are gig workers whose managers and HR department alike are algorithms. A handful of powerful tech companies control the livelihoods of this vast, globally dispersed workforce, subjecting it to constantly shifting rules, opaque pay scales, and arbitrary decisions made by AI systems that workers can neither understand nor appeal.

In Code Dependent: Living in the Shadow of AI, Madhumita Murgia interviews some of these gig workers in a comprehensive attempt to understand the far-reaching consequences of our increasing reliance on artificial intelligence. (If you think you don’t use AI, you’re wrong.) Each of the book’s ten chapters explores a different aspect of human life impacted by AI – our livelihoods, bodies, identities, health, freedom, safety nets, bosses, rights, futures, and societies – together tracing the inescapable tentacles of this technology. While acknowledging AI’s potential benefits, Murgia focuses primarily on the real-world experiences of individuals whose lives these technologies profoundly reshape, often in unsettling and unexpected ways.

These precarious lives are eclipsed by the convenience that the relatively more privileged enjoy in their day-to-day lives: a book delivered to our doorstep within 24 hours, a piping hot meal without stepping into the kitchen, groceries without setting foot outside. No wonder, then, that all we see is convenience. It’s easy to mistake this for a characteristic of a developed society, and, almost like clockwork, the human costs involved become an aside. This is the very logic that Code Dependent dismantles.

Algorithms without accountability

The digital architecture that enables gig work has a lack of accountability written into it. When a tech company delegates managerial tasks to algorithms, the implication is that the technology cannot make mistakes: it’s infallible, better, and cheaper than a human workforce. “The system can’t be wrong, what have you done?” a customer service representative asks Alexandru Iftimie, a Romanian immigrant in London who drives for Uber, one of the gig workers Murgia interviews.

When Iftimie started receiving automated warnings from Uber accusing him of fraudulent activity, he had no idea what he might have done wrong, nor any way to appeal the algorithm’s judgement. Eventually, Iftimie joined a lawsuit against Uber, and an Amsterdam court ruled that the company must provide him with the personal data it holds on him, including the information used to flag him for fraud. But the court didn’t mandate disclosure of how Uber’s fraud-detection algorithms work, leaving other drivers vulnerable to the same treatment.

Such passivity towards the consequences of unchecked AI is symptomatic of how the technology’s rapid advancement and deployment disproportionately impact the world’s most vulnerable populations. Code Dependent reinforces the arguments of academics and activists exploring the idea of data colonialism – a theory describing how powerful technology companies profit by extracting data from marginalised communities, with little value returned to those communities.

Through conversations with the hidden workforce of data labellers and content moderators, Murgia unpacks how the workers who provide, collect, and organise copious amounts of data – the raw material without which the industry would cease to exist – are often excluded from its benefits and further subjected to algorithmic control and surveillance.

The “blameless” machine

Murgia also talks to women harassed with deepfake technology, software engineers wooed by data, lawyers attempting to hold Big Tech accountable, and inclusive healthcare technologists, unlocking an open secret: the repercussions of AI are not blameless. She breaks down how this delusion of irreproachable AI is a fable trickling down from the top ranks of tech companies and the champions of unfettered AI development. AI itself has no moral agency or intent – no sentience – so it is “blameless” only in the sense that it does not consciously choose its actions.

That doesn’t mean there’s no one to blame, even if the engineering of these applications makes it seem so (the system can, in fact, be wrong). Though perceived as infallible, these systems are designed by humans, trained on data selected by humans, and deployed in contexts defined by humans. Every step of the process involves human choices that can introduce bias, prioritise some values over others, and lead to harmful outcomes. Murgia traces the lines of responsibility for harm caused by AI back to the corporations and governments that design, deploy, and benefit from it.

The increasing use of AI in sectors like healthcare, policing, and welfare is producing a “digital welfare state”, where algorithms make consequential decisions about people’s lives with little accountability or opportunity for redress. Amsterdam’s ProKid algorithm identified youth at risk of criminality using data such as police contacts, family history, and socioeconomic factors, disproportionately flagging children of colour and single mothers.

Families on the resulting “Top400” and “Top600” lists faced unexplained surveillance and interventions they couldn’t refuse. Even more harrowing is the chapter in which Murgia untangles the AI-powered machinations that enable the dictatorship in China and the consequent dehumanisation of Uyghur Muslims. Thanks to facial recognition systems that use pseudoscience to arbitrarily infer an individual’s behavioural tendencies, minorities in China live a life in which an uncontrollable facial spasm could land them in jail.

The myth of democratisation

The myth of democratisation follows the myth of infallible objectivity. The lore goes: as long as you have a smartphone, you can avail yourself of the same services that the social classes above you have gate-kept for so long, even if they come with some limitations. If Netflix’s Rs 199-per-month Mobile Plan for India comes to mind, you’ve hit the nail on the head.

This myth applies not only to the products you consume and how you consume them, but also to how you work to make that consumption possible. When Uber promises its gig workers flexibility and control over their working hours, it’s ostensibly granting them the autonomy of C-suite executives, but at a cost. Your Rs 199 Netflix comes in Standard Definition, and your job comes with no guarantee of security.

On Twitter, you’ll see tech enthusiasts hoist the word “democratisation” into the same sentence as whatever new AI-powered tool has entered the market. This smokescreen obscures a more terrifying reality: key researchers tasked with ensuring that AI systems remain aligned with human values and goals have resigned from major AI companies, citing disagreements over their employers’ direction and strategy on AI safety.

Murgia calls for greater transparency, regulation, and ethical consideration in the development and deployment of AI, emphasising the need to prioritise human dignity and ensure that the technology benefits all of humanity, not just a powerful few.

She’s not interested in fear-mongering; instead, she highlights the importance of transparency and accountability in AI development. It’s easy to slip into resignation when you encounter the horror stories Murgia tells with unabashed clarity, but it’s precisely this helplessness that she, as the narrator of these real voices affected by AI, warns you against. You do, in fact, have more control than you think. The inclusion of diverse voices and perspectives in AI development could lead to more equitable and ethical outcomes. “It’s always been here,” she seems to be saying, “you need to know what it does.” You cannot accept AI at face value, especially when it’s too busy scanning your face.

Code Dependent: Living in the Shadow of AI, Madhumita Murgia, Pan Macmillan.