The European Union looks set to ban some of the most concerning forms of artificial intelligence, such as the “social credit” surveillance system used in China, according to draft AI regulations published by the bloc.

The proposed regulations, which will be reviewed by elected representatives before they can pass into law, will also bring some comfort to those outraged by instances of bias and discrimination generated by AI.

These include hiring algorithms found to systematically downgrade women’s professional profiles and flawed facial recognition technology that has led police to wrongfully arrest black people in the United States. Such AI applications are regarded by the EU as high-risk and will be subject to tight regulations, with hefty fines for infringement.

This is the latest step in the European discussion of how to balance the risks and benefits of AI. The aim appears to be to protect citizens’ fundamental rights while maintaining competitive innovation to rival the AI industries in China and the US.

The regulations will cover EU citizens and companies doing business in the EU and are likely to have far-reaching consequences, as was the case when the EU’s General Data Protection Regulation (GDPR) came into force in 2018. The proposals are also likely to inform and influence the United Kingdom, which is currently developing its own strategic approach to regulating AI.

Strong new laws

Most strikingly, the draft legislation would outlaw some forms of AI that human rights groups see as most invasive and unethical. That includes a broad range of AI that could manipulate our behaviour or exploit our mental vulnerabilities – as when machine-learning algorithms are used to target us with political messaging online.

Likewise, AI-based indiscriminate surveillance and social scoring systems will not be permitted. Versions of this technology are currently used in China, where citizens in public spaces are tracked and evaluated to produce a trustworthiness “score” that determines whether they can access services such as public transport.

The EU also looks set to take a cautious approach to a number of AI applications identified as high-risk. Among these technologies are large-scale facial recognition systems – considered easy to deploy using existing surveillance cameras – which will require special permission from EU regulators to roll out.

Contentious facial recognition AI is regarded as ‘high-risk’ by the EU. Photo credit: Reuters

Many systems known to contain bias are also classified as high-risk. AI that assesses students and determines their access to education will be tightly regulated – such technology achieved notoriety in 2020, when an algorithm used to estimate A-level results unfairly downgraded many UK students’ grades.

The same caution will apply to AI used for hiring, such as algorithms that filter applications or evaluate candidates, and to financial systems that determine credit scores. Similarly, organisations deploying systems that assess citizens’ eligibility for welfare or judicial support will have to carry out detailed assessments to show they meet a new set of EU requirements.

To give the rules teeth, and in line with the EU’s existing penalties for serious data misuse under the GDPR, the AI regulations include fines for infringements of up to €20 million or 4% of global turnover, whichever is higher.
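For illustration only, the arithmetic behind that cap is simple: the ceiling on a fine is whichever is larger, the fixed €20 million figure or 4% of a company’s global turnover. The short Python sketch below makes this explicit; the function name and the example turnover figures are illustrative assumptions, not taken from the draft regulation.

```python
def max_fine_eur(global_turnover_eur: float) -> float:
    """Upper bound on a fine under the rule quoted in the article:
    the greater of a fixed EUR 20 million or 4% of global turnover."""
    return max(20_000_000, 0.04 * global_turnover_eur)

# A firm with EUR 1 billion in global turnover faces a cap of EUR 40 million,
# while a smaller firm is still exposed to the EUR 20 million floor.
print(max_fine_eur(1_000_000_000))  # 40000000.0
print(max_fine_eur(10_000_000))     # 20000000.0
```

In other words, the percentage-based cap only dominates for companies with global turnover above €500 million; below that, the fixed €20 million floor applies.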

Reckoning with AI

Globally unique and sweeping in its application, the proposed regulation is a clear statement from Europe that it prioritises citizens’ fundamental rights over technical autonomy and economic interests.

But there are also concerns. Some will argue that the measures go too far and will stifle Europe’s AI innovation. Indeed, the White House warned Europe against overregulating AI in 2020, aware that China’s relative lack of protections could hand it a competitive advantage over its rivals.

On the other hand, privacy advocates and campaigners against bias in AI may be left disappointed. Some of the most problematic AI systems are excluded from the regulation, notably those used for military purposes, such as drones and other automated weapons – again speaking to fears of Chinese dominance in weaponised AI.

It is also possible that other applications, such as the fusion of AI with existing mass surveillance capabilities, could be permitted where authorised by law. This would leave the door open for their use in law enforcement, which is exactly the area that some observers are most worried about. Such loopholes for AI-driven state surveillance systems will trouble human rights and privacy advocates.

Contested definitions

Critics have highlighted the vagueness of the definition of AI in the draft legislation, which focuses in particular on machine learning and may not cover the next generation of computing technologies, such as quantum or edge computing. As always with legal documents, the devil will be in the detail.

Equally, there are open questions about the distinction between high-risk and low-risk AI. The strictest rules apply only to the former, yet it is not always possible to determine the nature of an AI system’s risks during the development cycle. Risk is a continuum, and splitting it into high and low requires an arbitrary cut-off that may cause problems down the line.

The regulation is no doubt a bold step in the right direction. It will now be reviewed by the Council of the European Union and the European Parliament. The process of reading, reviewing and agreeing will likely take some time, during which the questions raised here can be explored and addressed.

But it stands to reason that many of the building blocks of the regulation will persist. By standing firm against forms of invasive surveillance and bias-prone AI systems, the legislation is a strong reminder that Europe takes seriously its obligation to safeguard its citizens’ fundamental human rights in a period of disruptive technological change.

Bernd Carsten Stahl is Professor of Critical Research in Technology at De Montfort University.

This article first appeared on The Conversation.