In the final weeks of campaigning, Argentine President-elect Javier Milei published a fabricated image depicting his Peronist rival Sergio Massa as an old-fashioned communist in military garb, his hand raised aloft in salute.
The apparently AI-generated image drew some 3 million views when Milei posted it on social media, highlighting how the rival campaign teams used artificial intelligence technology to catch voters’ attention in a bid to sway the race.
“There were troubling signs of AI use” in the election, said Darrell West, a senior fellow at the Center for Technology Innovation at the Washington-based Brookings Institution.
“Campaigners used AI to deliver deceptive messages to voters, and this is a risk for any election process,” he told Context.
Right-wing libertarian Milei won Sunday’s run-off with 56% of the vote, tapping into voter anger at the political mainstream, including Massa’s dominant Peronist party. But both sides turned to AI during the fractious election campaign.
Massa’s team distributed a series of stylised AI-generated images and videos through an unofficial Instagram account named “AI for the Homeland”.
In one, the centre-left economy minister was depicted as a Roman emperor. In others, he was shown as a boxer knocking out a rival, on a fake cover of The New Yorker magazine and as a soldier in footage styled after the war film “1917”.
Other AI-generated images set out to undermine and vilify Milei, portraying the wild-haired economist and his team as enraged zombies and pirates.
The use of increasingly accessible AI tools in political campaigning is a global trend, tech and rights specialists say, raising concerns about the technology’s potential impact on major elections next year in countries including the United States, Indonesia and India.
A slew of new “generative AI” tools such as Midjourney are making it cheap and easy to create fabricated pictures and videos.
With few legal safeguards in many countries, there is growing unease about how such material could be used to mislead or confuse voters in the run-up to elections.
“Around the world, these tools to create fake images are being used to try and demonise the opposition,” said West.
“While it is not illegal to use AI-generated content in most countries, images portraying people saying things they didn’t, or simply making things up, clearly cross an ethical line.”
Political use
Most of the AI-generated images used in the Argentine election campaign were satirical in flavour, seeking to elicit an emotional reaction from voters and spread rapidly on social media.
But AI algorithms can also be trained on copious online footage to create realistic but fabricated images, voice recordings and videos – so-called deepfakes.
During the recent campaign, a doctored video that appeared to show Massa using drugs circulated on social media, with existing footage manipulated to add Massa’s image and voice.
It is a dangerous new frontier in fake news and disinformation, researchers say, with some calling for material containing deepfake images to carry a disclosure label stating that it was generated using AI.
“Now they have a tool that allows them to create things from scratch, even though it’s evident that it may be artificially generated,” West said, adding that “disclosure alone does not protect people from harm.”
“It is going to be a huge problem in global elections in the future as it will get increasingly harder for voters to distinguish the fake from the real,” he said.
Democracy risk
As AI-generated content becomes more accessible and more convincing, social media platforms and regulators are struggling to stay ahead, said disinformation researcher Richard Kuchta, who works at Reset, a group that focuses on the technology’s impact on democracy.
“It is clearly a cat and mouse game,” Kuchta said. “If you look at how misinformation works during an election, it is still pretty much the same. But ... it got massively upscaled in terms of how deceiving it can get.”
He cited a case in Slovakia earlier this year, in which fact-checkers scrambled to verify a faked audio recording posted on Facebook just days before the country’s September 30 election.
In the recording, a voice resembling that of one of the candidates appeared to discuss how to rig the election.
“Eventually, the piece was dismissed as fake, but it did a lot of harm,” Kuchta said.
Meta Platforms, which owns Facebook and Instagram, said this month that from 2024, advertisers will have to disclose when AI or other digital methods are used to alter or create political, social or election-related advertisements on its sites.
In the United States, a bipartisan group of senators has proposed legislation to prohibit the “distribution of materially deceptive AI-generated audio, images, or video relating to federal candidates in political ads or certain issue ads.”
Additionally, the US Federal Election Commission wants to regulate AI-generated deepfakes in political adverts to safeguard voters against disinformation ahead of next year’s presidential election.
Other countries are pursuing similar efforts, though no such regulatory proposals have yet been presented in Argentina.
“We are still in the early stages of AI,” said Olivia Sohr, a journalist at the Argentine fact-checking nonprofit Chequeado, noting that most of the fake information that circulated during the campaign involved fabricated newspaper headlines and false quotes attributed to a specific candidate.
“AI has the potential to take disinformation to a new level. But for now, there are other equally effective methods that achieve the same goals without being as expensive or sophisticated.”
This article first appeared on Context, powered by the Thomson Reuters Foundation.