We’re Already Living in the Post-Truth Era
You don’t need to see a fake image for it to affect your mind.
This is Atlantic Intelligence, a limited-run series in which The Atlantic’s leading thinkers on AI will help you understand the complexity and opportunities of this groundbreaking technology. Sign up here.
For years, experts have worried that artificial intelligence will produce a new disinformation crisis on the internet. Image-, audio-, and video-generating tools allow people to rapidly create high-quality fakes to spread on social media, potentially tricking people into believing fiction is fact. But as my colleague Charlie Warzel writes, the mere existence of this technology has a corrosive effect on reality: It doesn’t take a shocking, specific incident for AI to plant doubt in countless hearts and minds.
Charlie’s article offers a perspective on the dustup over an edited photograph of Kate Middleton and her children, released by Kensington Palace on Sunday. The image was immediately flagged by observers—and, shortly thereafter, by wire services such as the Associated Press—as suspicious, becoming the latest bit of “evidence” in a conspiratorial online discourse about Middleton’s prolonged absence from the public eye. There’s no reason to suspect that the image is fully synthetic. But in the generative-AI era, any bit of media might be. “It’s never been easier to collect evidence that sustains a particular worldview and build a made-up world around cognitive biases on any political or pop-culture issue,” Charlie writes. “It’s in this environment that these new tech tools become something more than reality blurrers: They’re chaos agents, offering new avenues for confirmation bias, whether or not they’re actually used.”
— Damon Beres, senior editor
Kate Middleton and the End of Shared Reality
By Charlie Warzel
If you’re looking for an image that perfectly showcases the confusion and chaos of a choose-your-own-reality information dystopia, you probably couldn’t do better than yesterday’s portrait of Catherine, Princess of Wales. In just one day, the photograph has transformed from a hastily released piece of public-relations damage control into something of a Rorschach test—a collision between plausibility and conspiracy.
For the uninitiated: Yesterday, in celebration of Mother’s Day in the U.K., the Royal Family released a portrait on Instagram of Kate Middleton with her three children. But this was no ordinary photo. Middleton has been away from the public eye since December, reportedly because of unspecified health issues, leading to a ceaseless parade of conspiracy theories. Royal watchers and news organizations naturally pored over the image, and they found a number of alarming peculiarities. According to the Associated Press, “the photo shows an inconsistency in the alignment of Princess Charlotte’s left hand”—it looks to me like part of the princess’s sleeve is disappearing. Such oddities were enough to cause the AP, Agence France-Presse, and Reuters to issue kill notices—alerts that the wire services would no longer distribute the photo. The AP noted that the photo appeared to have been “manipulated.”
What to Read Next
- What to do about the junkification of the internet: “Social-media companies define how billions of people experience the web,” Nathaniel Lubin writes. “The rise of synthetic content only makes their role more important.”
- Why we must resist AI’s soft mind control: “When I tried to work out how Google’s Gemini tool thinks, I discovered instead how it wants me to think,” Fred Bauer writes.
- We haven’t seen the worst of fake news: “Deepfakes still might be poised to corrupt the basic ways we process reality—or what’s left of it,” Matteo Wong writes.
P.S.
AI may play a role in how social-media companies patrol their platforms in the lead-up to the election. “Meta has started training large language models on its community guidelines, to potentially use them to help determine whether a piece of content runs afoul of its policies,” my colleague Caroline Mimbs Nyce wrote in an article last week about the steps tech companies could take to tamp down on political extremism. “Recent advances in AI cut both ways, however; they also enable bad actors to make dangerous content more easily, which led the authors of [a recent NYU report on online disinformation] to flag AI as another threat to the next election cycle.”
— Damon