In October 2025, Meta CEO Mark Zuckerberg announced to shareholders that social media had entered a new, third phase.
“First was when all content was from friends, family, and accounts that you followed directly,” he said. “The second was when we added all of the creator content. Now as AI makes it easier to create and remix content, we’re going to add yet another huge corpus of content.”
Weeks later, Reuters reported that in 2024, Meta earned $16 billion, or 10 percent of its revenue, from ads for scams and banned goods, many of which rely on AI tools to appear legitimate. One company called Mabel and Daisy, for example, used an AI-generated image of a mother and daughter to advertise “timeless clothing” from a shop in Bristol, England. In reality, the company was selling poorly made goods from Hong Kong and forcing customers to pay extortionate return fees.
Many similar scams remain active on Facebook, using AI-generated videos and images to manipulate users. Some are easy to spot, but the latest generation of AI models is producing visuals that are far harder to immediately recognize as inauthentic.
Direct scams from fake companies are not the only AI-generated content flooding timelines. Merriam-Webster’s 2025 Word of the Year was “slop,” a nod to the low-quality AI-generated content saturating social media. According to AI company Kapwing, 21 to 33 percent of the content a freshly opened YouTube account is now shown is reportedly “low-quality AI video.”
With the growing speed, scale, and sophistication of AI-generated content, experts aren’t just worried about social media users believing things that aren’t true. They’re worried about an ever-growing and deep-seated skepticism that makes people all but sure to disbelieve what’s true.
Daniel Atherton, an editor at the AI Incident Database, a crowdsourced database of media reports on AI incidents, fears that conversations about deepfakes remain centered on single, high-visibility cases. In early 2022, a deepfaked video of Ukrainian President Volodymyr Zelensky surrendering to Russia made headlines, as did AI-generated robocalls impersonating President Joe Biden in 2024. In the latter case, thousands of New Hampshire Democrats received messages two days before the state’s January 23 presidential primary encouraging them not to vote.
While such cases are certainly concerning, Atherton says that focusing on them alone may obscure the more ambient harms of AI-generated media. “We tend to learn about deepfakes through high-visibility incidents, usually those that pass muster as ‘newsworthy,’ even as they are increasingly functioning as an infrastructural problem embedded in everyday systems of trust and decision-making,” he told The Dispatch.
“For most of my life I could safely assume that the vast majority of photographs or videos that I see are largely accurate captures of moments that happened in real life,” Instagram head Adam Mosseri warned in a recent post. “This is clearly no longer the case and it’s going to take us, as people, years to adapt.”
Researchers have been warning about this kind of skepticism spillover for years, along with the risks of believing claims that something is AI-generated even when it is not.
“Warnings about deepfakes can be a double-edged sword,” Simon Clark, a research psychologist at the University of Bristol in the United Kingdom, told The Dispatch. “On the one hand, they can reduce belief in fake videos, which is exactly what they’re meant to do. On the other hand, they can also reduce trust in real videos, making people more skeptical overall … If real videos are too easily labelled as ‘probably fake,’ that can undermine accountability and trust.”
Recent international events show how this uncertainty is being weaponized. During protests in Iran, a verified video appeared online of a protester refusing to move as security forces on motorbikes swept the streets. When an AI-enhanced version of the same video was shared, accounts associated with the Iranian regime began dismissing both versions as fake, pointing to the visible touch-ups. Russian and Iranian regime accounts are also reportedly posting AI-generated content attacking the Islamic Revolutionary Guard Corps, then immediately “exposing” it as evidence of a foreign operation by the Americans or Israelis.
Part of the problem is that AI disinformation “introduces just enough ambiguity to cause real damage before verification or institutional response is even possible,” Atherton said. Reports in his database of malicious actors using AI to spread disinformation and to scam victims have risen eightfold since 2022, and he believes that is probably only a fraction of actual occurrences.
What also worries researchers is that such actions are increasingly automatable.
Last week, 22 leading experts from across the globe published a paper in Science warning that AI agents could be used to manipulate “beliefs and behaviors on a population-wide level,” and that a single user could control malicious “AI swarms” capable of “coordinating autonomously, infiltrating communities and fabricating consensus efficiently.” “By adaptively mimicking human social dynamics, they threaten democracy,” the paper’s authors wrote. Instead of hundreds of Russian workers manually commenting on news articles to try to influence Americans, AI could operate autonomously, without oversight and at far greater scale.
Meanwhile, AI-generated video is getting more and more believable. Lea Marchl, a reporter at the news verification company NewsGuard, noted a sharp increase in AI involvement in false claims following the advent of Sora, an AI video-generation tool produced by OpenAI and released to the public in December 2024. The quality of its videos has also skyrocketed. “In the past two weeks for scenes representing Minneapolis, I’ve really been shocked at how high-quality they are compared to how it was two months ago,” she said.
McGregor argues that good journalism will become more valuable as truth becomes harder and harder to discern. “Institutions whose primary purpose and concern is producing trustworthy information are likely to be the only thing that can establish truth via the digital media,” he told The Dispatch.