From photoshopped celebrity nudes to fabricated scenes of war atrocities, artificial intelligence has unleashed a powerful new force warping our grasp on reality and truth.
A comprehensive new study exposes just how rapidly AI-fueled misinformation has spread across the internet.
Researchers Expose Meteoric Rise of AI Misinformation
In a preprint paper published online last week, researchers from Google, Duke University, and major fact-checking organizations like Snopes compiled a first-of-its-kind dataset.
It encompasses a staggering 135,944 misinformation claims that were debunked by fact-checkers between 1995 and November 2023.
Their analysis reveals a jarring development – after years of AI-generated imagery being functionally non-existent in the fact-checking world, its prevalence suddenly spiked in early 2023. What was once a rounding error ballooned into a substantial portion of misinformation cases almost overnight.
“The sudden prominence of AI-generated content…suggests a rapidly changing landscape,” the researchers wrote. “AI-generated images made up a minute proportion of content manipulations overall until early last year.”
“We go through waves of technological advancement that shock us in their capacity to manipulate reality,” said Alexios Mantzarlis, misinformation expert at Cornell Tech. “We’re going through one now with AI’s democratization letting anyone create and spread fakes. The question is whether we can adapt safeguards quickly enough.”
Deceptive Real Images Still Dominate – But For How Long?
While the rise of AI-generated misinformation and image fakery is eye-opening, traditional misinformation tactics remain far more common overall for now.
Around 80% of visual misinformation stems from genuine images or videos taken out of context and repackaged with false framing.
“Regardless, generative-AI images are now a sizable fraction of all misinformation-associated images,” the study warned. The exponential growth curves suggest AI fakery could rapidly eclipse other methods.
Intriguingly, the researchers noted that fact-checks of AI imagery tapered off toward late 2023. However, experts like Sasha Luccioni of AI company Hugging Face aren’t convinced the underlying problem is slowing.
“I feel like this is because there are so many [AI fakes] that it’s hard to keep track!” Luccioni said. “I see them regularly myself, even outside of social media like in advertising.”
The Viral Fakes Fooling Celebrities & the Public
Helping drive AI misinformation into the mainstream are recent viral hoaxes reaching astonishingly high-profile audiences. Last spring, bogus images depicting Pope Francis wearing an outlandish puffy coat rocketed across the internet.
More recently, explicit fake nudes of singer Taylor Swift proliferated online before content strikes brought them down – but not before they had fooled untold numbers of viewers. The images were created using AI image generation tools from OpenAI and Microsoft.
Katy Perry was also at the center of an AI-fueled case of mistaken identity. Fabricated pictures of the pop star attending the 2024 Met Gala in New York fooled her own parents and many fans, despite Perry never actually being there.
“These images are highly shareable…they don’t require replicating the false claim,” the study said, calling out screenshots as a common misinformation vehicle.
From Video to Search – Multi-Front War on Reality
Beyond static imagery, the analysis highlighted how doctored and deceptively framed video is also rapidly distorting reality: around 60% of debunked claims now involve video footage.
AI imagery fakes are even making Google Images search results increasingly unreliable. SEO-optimized content farms have proven adept at surfacing computer-generated celebrity fakes for popular search queries, harnessing the technology for profit.
Google’s counter-initiatives, like digital watermarking and ranking penalties for detected AI content, may be too little too late. As one Google spokesperson admitted, “When we find low-quality [AI] content ranking highly, we build scalable solutions…not just for one query, but for many.”
The Essence of Truth Itself Is Under Siege
The weaponization of AI imagery for misinformation underscores how cutting-edge tech can rapidly undermine our shared reality and trust in authoritative sources.
“If Big Tech collaborated on an AI watermarking standard, that would help,” Luccioni said. “But a bigger question looms: What happens when the real and AI-generated are indistinguishable?”
As AI capabilities explosively advance, society may need to redefine how we consume reality itself – and what we’re willing to take as truth.