In December, Prime Minister Narendra Modi addressed a Tamil-speaking audience in Varanasi. He spoke in Hindi. But the audience was not listening to his voice – they had earphones plugged in. While Modi spoke in Hindi, the audience heard an artificial intelligence-generated rendition of his speech in Tamil. The voice was not Modi’s, but the words were.

It was one of the latest attempts by the Bharatiya Janata Party to connect with voters of a state that has a long history of resisting the imposition of Hindi and has traditionally rejected the party.

What made the Tamil rendition of Modi’s speech possible was “Bhashini”, an application developed by the Indian government. Available for public download, the application aims to bridge language divides in a linguistically diverse country.

Around the same time in Pakistan, incarcerated former prime minister Imran Khan was pulling off something more spectacular. Lodged in prison and unable to communicate with supporters ahead of Pakistan’s general elections, Khan had his messages relayed through artificial intelligence-generated audio clips.

Unlike Modi’s experiment, Khan’s AI clips closely resembled his voice. They gave Pakistani voters the sense that, despite being barred from contesting the elections, their leader was among them. Khan also used AI to conduct virtual campaign rallies from jail.

The result was there for the world to see. Even though Khan’s Pakistan Tehreek-e-Insaaf was denied the use of its election symbol and its candidates were forced to contest as independents, they won the largest share of seats in Pakistan’s general elections.

The Pakistan Tehreek-e-Insaaf has been using digital tools and social media to reach supporters since 2013, including during Khan’s sit-in in Islamabad. AI was the next logical step.

Though the political adoption of AI is still in its early stages, these two instances exemplify how the technology is taking shape and how it is, in turn, likely to shape politics going forward.

New tech, growing adoption

Sajjad Haider, a professor of computer science at the Institute of Business Administration in Karachi, Pakistan, doubts that Modi’s speech was translated in real time using AI. He believes the speech was recorded before it was delivered and then translated and relayed to the crowd through their earphones.

“I do not think any politician will completely rely on and trust AI when their career and reputation are on the line,” said Haider. “It may be unable to translate the words, emotions and nuances of the language being used in real time.”

Indian news outlets had claimed that Modi’s speech was translated into Tamil in real time. Technology experts in India, however, confirmed that a prepared version of the Hindi speech had been fed into Bhashini in advance to keep the translation ready.

“Similarly, Imran Khan’s speech relied on pre-written content which was recorded by cloning his voice,” said Haider. “It is too tricky to use AI to do something like this live.”

Either way, the technology is being deployed in innovative ways.

Though the debate about the use of AI has largely focused on its deleterious impact on democratic processes, the technology is also giving voice to the voiceless, if not the powerless.

Senthil Nayagam, a startup founder based in Chennai, said that what earlier required a Hollywood movie budget can now be pulled off at a fraction of that cost. “AI has levelled the playing field for smaller political parties who do not have the financial resources commanded by bigger parties,” said Nayagam.

In the US and other countries, political parties spend on television advertisements, he said. “Now, you can do that with AI.”

Nayagam, the founder of generative AI startup Muonim Inc, came into the limelight this year after his company recreated a lifelike digital version of M Karunanidhi, one of the tallest politicians in Tamil Nadu and a leader of the ruling Dravida Munnetra Kazhagam. Karunanidhi died in 2018, but an AI version of him delivered an eight-minute speech at the book launch of a party member in January.

Political parties are tapping into the power of creating hyper-personalised messages at a fraction of the cost and reaching millions simultaneously.

Nighat Dad, a lawyer and digital technology expert, said that AI played a crucial role in Pakistan in shaping public opinion through social media platforms, online spaces and WhatsApp.

Dad, who is executive director of the Lahore-based Digital Rights Foundation and a member of social media company Meta’s Oversight Board, said political parties used social media platforms to deliver speeches, hold digital rallies and raise funds.

A senior member of an Indian political campaign management organisation that has worked with various parties said that major parties have access to all kinds of voter data that can be used to send personalised messages.

“The party in power will have more granular details, but the entire profile of a voter at the booth level is available to most parties,” said the senior member. Using AI, a party can send personalised messages directly to a voter’s mobile phone, he said. “The connect of a local politician addressing a voter by their name makes a huge difference.”

In Pakistan, such dedicated outreach made a huge difference during the elections.

Dad said that political parties used AI to analyse huge amounts of voter data that helped create detailed voter profiles. “Access to the preferences, interests, and demographics of different voter categories meant that political parties could organise targeted and precise campaigns,” said Dad.

This proved decisive for the Pakistan Tehreek-e-Insaaf after its symbol – a cricket bat – was derecognised in January, forcing candidates to contest on individual symbols, which could have confused voters. The party sent voters in each constituency the details of its candidates and their allotted symbols.

Similarly, constituency-wise voter details parsed with AI assistance were sent to all candidates so that they could reach out individually to voters at the grassroots level. The same technology was then deployed to connect Khan, from jail, with the voter base across the country.

The former Pakistani prime minister has been forthright about using AI to expound his views while imprisoned. For instance, The Economist in January published an opinion piece attributed to Khan. The publication did not carry a disclaimer, but Khan said that the article was written using AI-enabled tools fed with inputs he had dictated to his visitors in prison.

Technology, thus, helped the Pakistan Tehreek-e-Insaaf overcome a chaotic and hostile political environment in Pakistan.

Politics and business

In India’s startup ecosystem, there is no dearth of companies willing to provide such services. There is growing interest, but also hesitancy.

Companies that provide AI services to political parties may have to walk a fine line between generating informative content and participating in a slugfest that often hinges on peddling falsehoods and personal attacks. “I have no reason to create content which will create problems for me,” said Nayagam. “False and negative content is what we want to stay away from.”

The owner of another company, which makes AI content for the advertising and Hindi film industries, said that doing business with a political party makes their regular non-political clients uneasy. “The election business is just a few days a year. Parliamentary polls come once every five years,” she said. “There is no point risking our core customers who give us most of our business around the year by doing deals with political parties.”

Companies that do business with one political party may find that rival parties keep away. Being associated with one party and its ideology can be damaging for business – more so when a political client fails to gain or regain power.

Business sense aside, experts say that the use of AI in politics should be guided by ethical practices.

A group of Indian AI companies, including Dubverse, Polymath Synthetic Media Solutions and Muonim, has issued an “Ethical AI Coalition Manifesto”. The manifesto commits signatories to upholding the “integrity of the democratic processes” by ensuring that “AI technologies are not used to manipulate elections, spread misinformation, or undermine public trust in political institutions”. It states that AI tools deployed in the political arena must be “transparent, accountable, and free from bias”.

AI risks and voters

While politicians have been quick to leverage AI, there have been few efforts to help voters make an informed and enlightened choice amid the proliferation of such technology.

Rakesh Dubbudu, the founder of data journalism and information portal Factly, has taken the lead here. Dubbudu and his team have launched a new platform “tagore.ai”, which he describes as a “credible information ecosystem with the power of AI”.

The platform has organised information from sources such as parliamentary records, government databases, budget speeches and historical government press releases, offering insight into government policy.

The portal, yet to be released for wider use, offers detailed information about the central government’s welfare schemes, analysis of election results through simple queries, verbatim texts of past promises made by politicians, and access to historical parliamentary questions, along with a trove of official financial information.

“The biggest risk with AI today is hallucinations,” said Dubbudu, referring to generative models producing false or made-up information. With Tagore AI, Dubbudu and his team are trying to minimise the risk of hallucinations by carefully curating the database.

Dubbudu said that providing accurate and credible information with relevant source links can help save hours of research time that would otherwise be spent sifting through results on search engines and other sources. “All this is very relevant in the context of democracy where speedy access to credible information and sources is critical,” he said.

Even so, there is scepticism and concern, with polling set to begin in India by mid-April. State assembly elections held months earlier had witnessed the use of deepfake videos and other multimedia content to target political opponents.

The Global Risks Report 2024, released by the World Economic Forum in January, warned that AI-fuelled misinformation was a common risk for India and Pakistan. For India, misinformation was the biggest threat.

Platforms are already putting measures in place in anticipation of a flood of misinformation.

In January, OpenAI said it was working on tools that will “empower voters to assess an image with trust and confidence in how it was made.” A month later, Meta announced the launch of a fact-checking helpline in India.

Dad, who is part of Meta’s Oversight Board, said that the WhatsApp helpline, with its support for English and three other languages, is a welcome measure but will still fall short. “With only four languages covered, many people will not be able to red flag misinformation, disinformation, and deepfakes because they speak a different language,” she said.

The use of deepfakes has underscored the urgent need for collaborative efforts to address the role of AI in electoral disinformation, said Dad. In Pakistan, for instance, there were several deepfake videos, some of which called for boycotting the polls.

Of course, political parties can also use AI positively: to reach out to voters with their agenda, explain what they have done and what they promise to do, show how they intend to solve problems, and encourage higher voter turnout.

Dubbudu said these are early days but there is an increasing risk of misuse that may prove detrimental to free and fair elections.

Lubna Jerar Naqvi is a journalist based in Karachi, Pakistan.

Sai Manish is a journalist based in New Delhi, India.