AI Fakes Spread Disinformation. Is the Distrust They Create Even Worse?
Less than a day after President Donald Trump falsely suggested that Ilhan Omar had staged an attack on herself, the images started to circulate. In AI-generated fake photos that soon flooded both X and Facebook, the Minnesota representative is depicted posing next to the man who invaded a town hall meeting and sprayed apple cider vinegar on her from a syringe. In the AI-generated images, Omar and the man are both smiling; in some, the congresswoman is brandishing a wad of cash, presumably to suggest that she bribed her attacker.
It’s extremely easy to trace not just the fact that these photos are fake, but how: one widely circulated image simply replaces a woman Omar’s attacker posed with in a separate Facebook photo with the congresswoman. And while the pictures were cartoonish and strained credulity—would someone engaged in a conspiracy with their attacker really pose with him holding a fistful of bribe money?—they worked, in two distinct senses. A false narrative soon took broad hold on the right that Omar had planned the attack on herself; the fake photos have often been used on social media as further “proof” that the event wasn’t real. But even when the images weren’t taken to be definitively real, they were still effective at creating a useful amount of uncertainty about what might actually be true, and at discouraging people from trying to find out.
“People have a very difficult time figuring out what is real and what is true.”
Dmytro Iarovyi is an associate professor at the Kyiv School of Economics who studies disinformation, propaganda, and “disinformation resilience.” Iarovyi, who is also a researcher at Vytautas Magnus University and a visiting scholar at Harvard, explains that the “sustained experience of living through disinformation changes people’s capacity to participate meaningfully in democratic life… In fact, it’s one of the major tasks of modern disinformation—not to persuade people in something, yet to discourage them, turn them into passive, tired, exhausted mob.”
In the United States, a strategic lawsuit against public participation, or SLAPP suit, is one that is filed to silence one’s critics or scare journalists away from covering a story. What we’re seeing now could be termed “strategic memes against public participation”—images designed to confuse, sow doubt, and chill public engagement with political issues.
Take, for instance, a discussion that ensued under a Facebook post about Omar’s assault from Ted Howze, a failed 2020 GOP California congressional candidate whose party support faded after he was found to have made bigoted posts against Black people and Muslims. “The attack was a staged production,” Howze wrote above a fake photo of Omar. “Don’t fall for it.”
A few people in Howze’s comments pushed back, noting that the image appeared to be AI generated. A majority believed it to be real. But a third camp simply wasn’t sure, asking where the photo had come from or seeking other contextualizing information not readily available in an unhinged Facebook comments section.
“Interesting,” one person wrote, “so hard to tell with all the abilities to add and change anything in a photo nowadays.”
“I think the first pic is fake,” another chimed in. “She is wearing the same sweater. BUT the others might be real.”
Fake images now attach themselves to virtually every global news event. Take, for instance, a spate of AI images claiming to depict Jeffrey Epstein, either showing him alive and well in 2026 or pictured with people we don’t know him to have associated with. One image shared on X by an obscure YouTuber claimed to show the dead convicted sex criminal walking in Tel Aviv; Hebrew speakers pointed out that road signs in the image were gibberish, among several tells that the image was fake. Nonetheless, the tweet has been viewed over 3 million times.
But that Epstein picture, where nonsense text immediately points to the image being false, is increasingly an exception, warns Georgetown University’s Renée DiResta, a social media researcher and globally recognized expert on propaganda and disinformation.
In the last few months, DiResta says, when it comes to AI-generated photos and audio, “We have crossed the threshold of it being virtually impossible for people to tell…
