For artificial intelligence engineers who have cared to look beyond their computer screens to study social structures and the short history of democracy, the last few months have been increasingly unsettling.

Something akin to the atomic bomb may have been unleashed on humanity, but that comparison captures only a vague sense of the destructive abilities of these technologies. Our inability to contemplate all the different monsters AI can morph into will, in the near future, force democratic societies to reimagine socio-political life at every level: individual, national and international.

The nuclear chain reaction was a theoretically well-understood process before the bomb was built, and there are only a handful of ways to achieve it, each of which leaves a physical trace behind.

AI models, on the other hand, are probabilistic and can be trained to perform as-yet-undefined tasks with little traceability, making it difficult even to predict how these engines will be repurposed, let alone imagine their societal impact. While regulations generally stifle innovation, the alarm bells AI industry leaders are sounding stem from an instinctive understanding that nobody really knows what has been unleashed, only that it will upend life as we know it.

At the level of the individual, researchers are admitting that AI “hallucinations” – the tendency to state false information as fact – may not be something that can be fixed. In anthropomorphising AI, specifically the large language models that underlie ChatGPT and similar products, engineers portray these models as inherently a source of truth and then attribute to them the human tendency to occasionally hallucinate. This is troubling on multiple levels.

Large language models are not truth engines and their output is merely a reflection of the data used to train them. The current versions seem to be truth engines because they were trained by “well-meaning” engineers with no sinister intentions, but large language models are inherently “truth-agnostic”: they only predict the probability of the next word given the words that came before it.

A large language model trained on a mountain of lies, which is easy to do, will confidently and overwhelmingly spit out falsehoods, converting the harmless-sounding “hallucinations” into reality for a society that is not interested in the truth to begin with.
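To make the point concrete, here is a minimal, purely illustrative sketch of a “truth-agnostic” text generator: a toy next-word model that learns nothing except which word tends to follow which in its training text. The tiny corpus of falsehoods below is invented for illustration, and real large language models are vastly more sophisticated, but the underlying principle is the same.

import random
from collections import Counter, defaultdict

# Invented toy "training corpus" of confident falsehoods (illustrative only).
corpus = [
    "the earth is flat",
    "the earth is the centre of the universe",
    "the moon is made of cheese",
]

# "Training": count which word follows which.
transitions = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        transitions[current_word][next_word] += 1

def generate(start_word, max_words=8):
    # Sample a continuation one word at a time, in proportion to the
    # frequencies seen during training. There is no notion of "truth" anywhere.
    output = [start_word]
    word = start_word
    for _ in range(max_words):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))  # e.g. "the moon is made of cheese"

Trained on facts, the same code would echo facts; trained on lies, it echoes lies with exactly the same statistical confidence.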


Even in societies interested in the truth, large language models lead to a fundamental paradox: automating content creation is their raison d’être, but human intervention is necessary to prevent hallucinations. If large language models are accepted as a fact of life, we can either make them extremely inefficient through human oversight – deploying an army of fact-checkers in inexpensive countries, as the social media companies do – or live with the consequences of their occasional lies. This leads to a liability quagmire.

Democratic societies and modern jurisprudence are based on observable truths and verifiable facts, and individual responsibility is an important construct of both. Even if technology moguls and neuroscientists argue that there is no free will, the lies and violent actions of individuals that harm others have hitherto attracted civil and criminal liability.

Until now, social media companies have sidestepped such liabilities by claiming that they are mere distributors of content created by users, which rings hollow in the case of large language models and generative AI. As these algorithms seep into all spheres of economic activity, new legal theories of human-machine liability will have to be developed for when things go wrong, as they invariably will.

This reckoning might scale down the ambitions of AI engineers and constrain the scenarios in which such engines can be used, although it is not entirely clear whether that will prevent hallucinations. At the very least, such constraints will reduce the impact of AI hallucinations.

The implications for democratic nation-states are even more dire. Copyright was the legal tool established to help creators monetise their work. Even as Hollywood writers strike to guard their turf, Japan has already declared that using copyrighted material to train AI will not violate copyright. Clearly, existing frameworks do not offer any easy solutions. Even if fees are imposed on the use of copyrighted material as training data, AI companies will find ways to bypass such costs by using copyright-free data and the latest training-data generation techniques.

While a Creative Commons license can cover the non-profit use of an artist’s work, regulations covering its for-profit use will most likely fail, leading to philosophical questions of how art should be monetised and, more importantly, of how democratic societies value art and what measures they are willing to take to support artists.

The National Lottery Heritage Fund of the United Kingdom could serve as a useful model: taxing socially agreed-upon vices such as tobacco and alcohol could financially support cultural heritage and the creation of art in the future. However, such changes will lead to a massive realignment of the financial incentives in most creative domains. For nation-states, this is only one, and a somewhat nebulous, industry segment to worry about. AI will most likely lead to upheaval in several industry verticals, deepening economic inequality.

Ideas such as algorithm taxes or robot taxes have been discussed in policy circles for the past few years with no clear path forward. Society will have to develop new constructs of the financial worth of a job, a task fraught with major ethical dilemmas.


Theoretically, the actuarial methods the insurance industry uses to determine the monetary value of a life could be borrowed to put a dollar amount on a lost job and to levy taxes on the algorithms taking those jobs away. In reality, this opens up a Pandora’s box of seemingly intractable issues: the innumerable types of jobs, their differing value across geographies, and the redistribution of such taxes.

These issues strike at the heart of regulations and laws which, at least in democratic societies, are primarily meant to prevent harm, not to promote virtue or serve any other purpose. Will we be willing to accept proactive and pre-emptive regulation? Will such regulation entail the kind of central bureaucratic planning committees that free societies have come to detest? There might not be answers today, but if the accelerating deployment of AI rapidly worsens inequality, the resulting social unrest might force our hand, upending democratic institutions as we know them.

Perhaps the biggest shockwaves of AI will be felt in the international arena, weakening the hand of democracies against autocracies. In the essay Federalist 10, James Madison, one of America’s founding fathers, argued that the violence of faction poses a significant threat to democracies. Geographical and cultural diversity, enlightened elected representatives, a relatively raucous House and a more deliberative Senate offering a sense of continuity, and an independent media were all designed to prevent or counteract the easy assembly and violent actions of political extremists.

Social media has already removed several of these guardrails, and AI, even if tightly regulated by democracies, is about to give an unfair advantage to autocracies. The AI-generated images of former US President Donald Trump’s arrest were a glimpse of it, but the problem runs much deeper.

An atomic bomb is a direct, visible threat that can obliterate human life on a large scale, and constructs like the Mutually Assured Destruction doctrine of the Cold War era acted as deterrents against unilateral action. Large language models and generative AI, on the other hand, offer an advantage to societies with the infrastructure and the intent to control the flow of information. With such deadly tools, authoritarian regimes could engineer bloodless coups in democracies without leaving any trace behind.

In a theoretical “battle of information attrition” between the United States and China, the massive surveillance systems China has set up would allow it to detect and quash information harmful to its regime more efficiently than the United States can. Edge computing – the ability to push fully trained AI algorithms onto small, mobile devices and deploy them globally – further weakens the hand of open societies. Remotely sowing lies and discord in open societies from the other end of the planet is a lot easier in the AI era.

While the democratic form of government has been in the minority globally since its inception, it has so far proven more resilient, making republics richer, more dynamic and nimbler than autocracies. Leaving aside the colonial-era plunder, these social contracts have brought immense prosperity to open societies.

However, to survive the AI era, democracies will have to build institutions and alliances to counter this misinformation armageddon. Perhaps independent, bipartisan bodies of businesses, bureaucrats and civil society will have to move at warp speed to come up with solutions before the lights go out on democracies. Global action sounds good on paper, and autocracies might even sign on to some new AI-era Geneva Conventions, but they will go behind our backs and violate them without leaving a trace.

As dangerous as it all sounds, we should be thankful that AI has not written such an essay – yet.

Mauktik Kulkarni is a neuroscientist, author, entrepreneur and filmmaker. He is the author of A Ghost of Che and Packing Up Without Looking Back.