It seems that although the internet is increasingly drowning in fake images, we can at least put some stock in humanity's ability to smell BS when it matters. A slew of recent research suggests that AI-generated misinformation had no material impact on this year's elections around the globe, largely because it is not very good yet.
There has been concern for years that increasingly realistic synthetic content could manipulate audiences in harmful ways. The rise of generative AI has renewed those fears, since the technology makes it far easier for anyone to produce fake visual and audio media that appear real. Back in January, a political consultant used AI to spoof President Biden's voice in a robocall telling New Hampshire voters to stay home during the state's Democratic primary.
Tools like ElevenLabs make it possible to upload a brief soundbite of someone speaking and then clone their voice to say whatever the user wants. Though many commercial AI tools include guardrails to prevent this kind of misuse, open-source models without such restrictions are readily available.
Despite these advances, a new story from the Financial Times looked back at the year and found that, across the world, very little synthetic political content actually went viral.
It cited a report from the Alan Turing Institute which found that just 27 pieces of AI-generated content went viral during the summer's European elections. The report concluded that there was no evidence the elections were impacted by AI disinformation because "most exposure was concentrated among a minority of users with political beliefs already aligned to the ideological narratives embedded within such content." In other words, the content mostly reached the few users who were already primed to believe it (before it was presumably flagged), and for them it reinforced existing beliefs about a candidate even when they knew the material was AI-generated. The report cited, as one example, AI-generated imagery showing Kamala Harris addressing a rally in front of Soviet flags.
In the U.S., the News Literacy Project identified more than 1,000 examples of misinformation about the presidential election, but only 6% of it was made using AI. On X, mentions of "deepfake" or "AI-generated" in Community Notes typically appeared alongside the release of new image generation models rather than around the time of elections.
Interestingly, users on social media seemed more likely to misidentify real images as AI-generated than the other way around, suggesting that, in general, they approached what they saw with a healthy dose of skepticism.
If the findings are accurate, that would make a lot of sense. AI imagery is everywhere these days, but images generated with artificial intelligence still have an off-putting quality to them and exhibit tell-tale signs of being fake. An arm might be unusually long, or a face might not reflect properly in a mirrored surface; many small cues like these give an image away as synthetic. Photoshop can be used to create far more convincing forgeries, but doing so requires skill.
AI proponents should not necessarily cheer this news, since it means generated imagery still has a ways to go. Anyone who has checked out OpenAI's Sora model knows the video it produces is just not very good: it looks almost like something rendered by a video game graphics engine (speculation is that it was trained on video games), one that clearly does not understand properties like physics.
That all being said, there are still reasons for concern. The Alan Turing Institute's report did, after all, conclude that a realistic deepfake containing misinformation can reinforce beliefs even when the audience knows the media is not real. Confusion over whether a piece of media is genuine erodes trust in online sources. And AI imagery has already been used to target female politicians with pornographic deepfakes, which can inflict psychological harm and damage professional reputations while reinforcing sexist beliefs.
The technology will surely continue to improve, so it is something to keep an eye on.