Last week, Tamil Nadu Finance Minister Palanivel Thiaga Rajan claimed that viral audio clips of him purportedly accusing leaders of the ruling Dravida Munnetra Kazhagam of corruption and praising the Bharatiya Janata Party were fabricated with the help of advanced technology such as artificial intelligence, bringing to the fore concerns about the potential misuse of such tools in politics.
AI tools, especially deepfakes, are increasingly being used around the world to spread misinformation and influence politics. Deepfakes are audio or video manipulated to show people saying or doing things that they never said or did. They are made to appear as realistic as possible and are usually created with malicious intent. Such tools have not been deployed in a significant way in India’s political arena thus far, experts said, but it may only be a matter of time.
It is therefore crucial, the experts added, to create public awareness about such tools: how they can blur the line between reality and fiction, and how they can be abused to build false narratives and manipulate public opinion.
PTR vs BJP
In an audio clip that appeared online on April 19, Rajan is purportedly heard accusing Chief Minister MK Stalin’s son Udhayanidhi Stalin and son-in-law V Sabareesan of amassing ill-gotten wealth to the tune of Rs 30,000 crore in a year. Six days later, Tamil Nadu BJP chief K Annamalai shared another clip of Rajan purportedly praising the Hindutva party for its “one person, one post” structure.
Rajan disputed the authenticity of the first clip, saying it had been maliciously fabricated using “advanced technology”. On April 26, the day after Annamalai put up the second clip, the minister doubled down, claiming the audio clips had been generated using tools such as deepfakes to discredit his party.
Annamalai responded that he was willing to face the law and prove that the audio clips were not fabricated. He also accused the ruling party of maliciously manipulating a clip from a podcast interview he had given and circulating it online.
A growing trend
This is not the first time AI tools have been employed for political ends in India. In February 2020, during the Delhi Assembly election, the BJP disseminated two deepfake videos of its leader Manoj Tiwari speaking in English and Haryanvi and attacking the Aam Aadmi Party. The original video, from December 2019, did not have him speaking about the Aam Aadmi Party or the election. It had Tiwari congratulating the public, in Hindi, for the passage of the Citizenship Amendment Bill in Parliament.
The BJP claimed it was a victim of deepfake and condemned the use of such technology in politics. However, a functionary in the party’s social media cell, Neelkant Bakshi, told Vice that Tiwari’s deepfake videos had been shared in over 5,800 WhatsApp groups by the party itself and had reached around 15 million people.
Rajan, in fact, cited Tiwari’s deepfake videos to argue his point. “If such authentic-looking videos can be machine-generated,” the minister noted, “imagine what all can be done with audio files.”
In April 2021, the Gujarat Police arrested a 28-year-old man for creating a deepfake clip of Vijay Rupani, then the state’s chief minister, singing a song by American artist Taylor Swift. While in this instance it was not the handiwork of political rivals, fact-checkers pointed out that political parties usually disseminated such content to ridicule their opponents.
There have been several cases of such technology being misused against politicians elsewhere. In March 2022, after Russia had invaded Ukraine, a deepfake video went viral on social media showing Ukrainian President Volodymyr Zelenskyy asking his troops to surrender.
In February this year, Canadian online magazine The Post Millennial created a deepfake video of American President Joe Biden introducing conscription. Although the magazine described it as an AI-generated depiction of what Biden announcing conscription might look like, some media outlets in the United States as well as Biden’s political opponents shared the video and its screenshots on social media, implying that the president had indeed announced conscription.
In March, Eliot Higgins, founder of the media website Bellingcat, tweeted a set of AI-generated images of New York police arresting former United States President Donald Trump. While Higgins had made it clear that the images were fake, they were circulated online and presented as if they were real.
In April, Trump’s supporters circulated a deepfake video showing his Republican Party rival Ron DeSantis saying that “leadership is about fooling the voters”.
Such manipulations have prompted warnings that the 2024 American presidential election will be a “deepfake election”.
Uptick in India?
Should India be worried about the use of tools such as deepfakes in the political arena?
The country has “always followed what the rest of the world has been doing in this space”, Karen Rebelo, deputy editor of the fact-checking platform Boom Live, said. “As of now, we are mostly seeing AI-generated images thanks to technologies like Midjourney AI which has made this technology accessible to the general public in a way that other AI apps didn’t. It still requires a certain level of tech savviness.”
The expected surge in misinformation generated through AI is a matter of worry. “This technology is outstripping any safeguard we have against misinformation,” Rebelo said.
Rebelo predicted that it was only a matter of time before political parties used deepfakes and similar tools in a significant way. “It’s ripe for exploitation and the potential for harm is immense,” she said. “Going forward, I think we can expect deepfakes of Indian public figures speaking in a local language.”
In fact, Rebelo said, this may already be happening at some level. “Such content has not surfaced yet, particularly on social media, but it may be already circulating elsewhere. Political parties will use proxies to spread such content and avoid the blame.”
This makes deepfakes and similar tools a major challenge in the battle against political misinformation. “The problem is not identifying deepfakes,” Rebelo said. “The challenge is where it is circulating.”
She added, “The deepfakes we are seeing right now can be identified because they have anomalies that can be seen with naked eyes. So the technical part about identifying deepfakes is not hard right now. But with the technology getting better, you will reach a stage where these anomalies won’t exist.”
Standing guard
Public awareness, therefore, is key to stopping such misinformation from influencing electoral politics, Rebelo said. Detecting deepfakes should not be left to the public alone, she added, but people must be urged to carry out basic fact-checking, as they would with other forms of misinformation.
“Tricks such as checking links and motives of the person posting it that are used to identify cheap fakes will have to translate to deepfakes,” said technology and misinformation researcher Tarunima Prabhakar. Cheap fakes are media altered using conventional editing tools.
A few experts have been raising awareness about how deepfakes can be deceptively and maliciously used in politics for some time now. In 2018, to raise awareness, BuzzFeed and actor Jordan Peele created a deepfake video showing former United States President Barack Obama referring to his successor Trump as a “dipshit”. In 2019, the research organisation Future Advocacy released a deepfake video showing Boris Johnson, then Britain’s prime minister, and his rival Jeremy Corbyn endorsing each other ahead of the general election.
Additionally, Prabhakar called for the Election Commission to play an active role in stopping malicious AI tools from being used to influence elections. “Tiwari was speaking two languages he doesn’t speak,” Prabhakar said, referring to the 2020 videos. “This was implying commonality with the voters and meant to score electoral points. This was misleading and the Election Commission should have acted on it.”
In fact, the Election Commission in January acknowledged concerns around deepfakes and their impact on electoral processes. Chief Election Commissioner Rajiv Kumar said that “disruptive elements” were seeking to manipulate public perception using deepfake-based narratives. It takes time for voters to realise a fake, Kumar said, “and to a very large extent damage is done by that time”.
Prabhakar proposed a solution: prohibit media manipulation and the use of generative AI in electoral processes, and employ prebunking, debunking and collaborative fact-checking, as has been done in some countries. “Prebunking” refers to the practice of countering potential misinformation by warning people against it before it is disseminated.