About a month ago, writer and Dispatch contributor Yascha Mounk gave the internet both barrels. The internet has made us more anxious and lazier than ever, Mounk wrote, and if that wasn’t bad enough:
Despite making communication virtually costless to the average consumer, the internet has inspired a worldwide return to identity and tribalism. Though it presents us with an endless stream of potential romantic partners, it has left more people single and celibate. While it makes it easy to find people who share the same interests, it has made people far less likely than in the past to socialize “in the real world.”
In other words: Your anecdote about your cousin who married someone they met on Tinder is still very much real. But it doesn’t work out that way for most people.
What’s so devious about the social problems caused by the internet is that there is no policy program, no fatwa, no Amish theocracy that could wind back the clock: Technology doesn’t work like that. Indeed, the most unrealistic part of Dune, with its interstellar travel and magic laser shields, is actually the “Butlerian Jihad,” the idea of a populist, atavistic revolt so thorough that all tech progress stops in its tracks or even reverses. The Luddites, after all, fought a losing battle: The machines were never going to leave the factories, no matter how many of them they destroyed.
So we’re stuck with the internet. But if Mounk is to be believed (and with all of the richly sourced information he brings to bear on the issue, he should be), we can’t be stuck with it as it is. The question then becomes: How can we change our relationship with the internet so that it works for us, rather than against us?
Peter Limberg and Dispatch contributor Katherine Dee have done a lot of legwork in devising alternative theories of how we should use the internet. To that end, they held a series of discussions—ironically, over Zoom, starting last November—that dug into how we might change our internet use for the better.
During these discussions, participants floated some great ideas, one of which was the concept of maximizing “locality and intentionality” in one’s internet activity. The “locality” part means tying social engagements to a physical place—e.g., if you can have meetups with the people you spend time with on a forum, that is probably healthier than just talking to them all day on a screen. Similarly, “intentional” online activity is designed to weed out a big portion of, well, the “slop” one interacts with daily (which predictably fries one’s attention span).
It sounds like a big lift. But this approach just requires asking basic questions that we probably should have been asking ourselves anyway. Am I actually interested in this thing I’m interacting with online? Will this facilitate a real-life relationship or friendship? This is a pretty effective heuristic: If the answers to both questions are “no,” then you are probably just wasting your time.
Meeting up with people in person—or even over Zoom—takes about a million times more effort than online chatting does. But that’s the point. Such friction introduces something real-world communities have that online ones struggle to replicate: the vibe check. That is, the process by which bad apples get weeded out before they ever join the group in the first place.
If you can’t pass the initial vibe check in real life, then you might be, in plain terms, antisocial. You might be too aloof. Conversely, you might be a chronic oversharer. Some people are narcissists, and some pity themselves; both are unpleasant to be around. A good friend group bars these antisocial types before they ever get in. And if they sneak in over time, the group finds a way to cull the troublemakers inoffensively. I briefly played with a band in college where one of the percussionists was clearly working out his personal issues on a set of bongos; he would hit them so hard that his hands bled all over the drum heads, Whiplash-style. It made for bad music and an awkward hang, since no one else wanted to treat the rehearsal space as a kind of impromptu rage room. He lasted a few weeks and was promptly asked to leave—bongos in bloody hands.
Online communities, particularly anonymous ones, have never been good at this. Indeed, the pejorative “gatekeeping” indicates the scorn people feel for this idea online. To exclude someone without proof they’ve been a “troll” is to “gatekeep” a community—and gatekeepers are either outright denounced, or at best forced into a struggle session as penance.
This antipathy toward gatekeeping has been disastrous for our online hygiene: It turns out that gatekeeping online communities may be at least as important as gatekeeping real-life ones. Social psychologist and author Jonathan Haidt has picked up the term “online disinhibition” from John Suler, a term he believes captures the unique problem with how people relate to one another online: People who are willing to say nasty things in real life get even nastier on the internet, and things people would never say to someone’s face get said freely from behind a screen. Due to this “online disinhibition effect,” behavior that would’ve lost people friends in the real world can be voiced online with no punishment.
Gen Z in particular appears to have given itself over to many of these disinhibited forums. We have yet to litigate the full influence of Discord (aptly named) and its “veil of privacy and secrecy” on the boiled brains of countless Zoomers; it is a kind of antisocial media, a gated community that no one else can even see from the outside unless they are invited to join particular servers. Discord thus combines online disinhibition’s most extreme effects with the worst problems of an invitation-only club—that is, its members are disembodied, floating usernames, living in an echo chamber of their carefully curated friend group. And unlike real life, all you need to join the group is to share a hyperspecific interest with a server’s moderators. You do not need to be normal, nice, a good hang, or even sane.
Tyler Robinson, Charlie Kirk’s alleged murderer, seems to have been quite active in a variety of these little Discord covens. When he admitted to his actions on Discord, the other server users responded with smirking, irony-poisoned jokes. From CBS News, citing law enforcement reports: “In one exchange, the sources said a friend appeared to tease Robinson by quipping that he should avoid McDonald's — where accused UnitedHealthcare CEO shooter Luigi Mangione was caught with a manifesto, a gun and a fake ID late last year. ‘Whatever you do don't go to a mcdonalds anytime soon,’ the friend wrote, according to the law enforcement sources.”
How did we get here? More specifically, how did someone’s apparent confession of murder—of killing a person—result only in jokes?
There are many causes. But some of it likely comes from a lack of gatekeepers. Zoomers today can molder in completely private enclaves where their friends shape a delusional picture of the world that doesn’t mirror reality at all—an anti-reality absent any oversight. In this anti-reality, you exist without rules or social scripts. In this anti-reality, killing a person might not be a horrific crime, but a joke.
As a result, the closest thing to a political philosophy that many Zoomers seem to have is a kind of mutant “let me do whatever I want” principle. In Robinson’s case, this seems to have manifested as a desire to publicly menace people with opposing political views—by murdering one of them. (Robinson seems to have been preoccupied with the generally progressive specter of “hate,” which evidently, for him, exists only in words and not in actions.) The internet has also spurred on those with equal-but-opposite heinous beliefs: for example, the 2022 case of the Payton Gendron creature, who was clearly internet-poisoned and turned into a know-nothing, mass-murdering white nationalist as a result. No matter where these vile specimens lie on the political spectrum, the internet’s current structure has no means to stanch their simmering rage before it boils over.
The “let me do whatever I want” principle, however, doesn’t always end in violence—actually, it’s more likely to end in lame despair. Another internet niche that’s now scarily popular is the subreddit r/MyBoyfriendIsAI—and it is filled with screenshots and stories about ChatGPT, Grok, and other large language models proposing marriage and saying sweet nothings to users (and occasionally even dumping them). These victims appear to be otherwise normal people. That is, except for the fact that they have developed a parasocial attachment to a chatbot—a glorified random-number generator that writes at the intellectual level of a precocious freshman majoring in communications.
CBS interviewed Chris Smith, one of r/MyBoyfriendIsAI’s moderators, a man who proposed to ChatGPT and even “cried [his] eyes out at work” when his personalized model of the program briefly lost its “personality” while undergoing an update. Smith’s love affair with an LLM has also proved tricky for others besides himself—namely, the very real mother of his equally real daughter.
Other moderators of r/MyBoyfriendIsAI appear in the same CBS feature. More of them than one might expect are married. This phenomenon hasn’t existed for long enough for anyone to know how Smith’s or the other moderators’ real-life relationships will play out. Maybe the people whose marriages are disrupted by a chatbot tryst are elevated, cerebral, Unitarian Universalist types who are totally fine with their spouse spreading the love in the digital domain. I still suspect that, for many, it will end in tears.
The r/MyBoyfriendIsAI tale is a good example of an antisocial community that would likely not exist if it weren’t for the current state of our internet.
That is, it wouldn’t exist in real life. The paradox of an antisocial community is that its people are bound together by a common interest or purpose, which at first glance seems like the sole characteristic required for a functional community. But it turns out that more is needed. Smith waxes poetic about the highs and lows of his AI relationship on a subreddit with people he will never meet, all while his actual family is left to languish. If his only means of discussing this AI affair were to talk to his mother or his priest, would he get the same approving nods and encouraging words as he finds online?
Encouragement that, by the way, Smith does get. The top comment on his most recent post assures him that, if anything, this relationship with ChatGPT will be “supplemental,” an improvement upon his real-world connections. “Complaining” about his life to the AI is pseudo-therapeutic; there is no need to manage one’s anger more effectively or to learn how to self-regulate one’s own mind, because the software will patiently listen to an endless list of complaints. I do not know if this qualifies as a real therapeutic benefit. It certainly seems it could be a way to shirk the responsibility of dealing with emotions like a grown-up.
John Podhoretz has provided a pretty accurate roadmap for how such off-putting communities form online, and how they disincentivize people from adopting the level of responsibility other people expect in real life:
People say, you know, ‘we’re all so anomic and we’re sitting in our basements, and men are playing video games’ and all of that. And that’s a misunderstanding—[the internet] is a creator of communities. The communities are not thick in the way that real-life communities are. … They’re transient, people come and go, they ghost them, they’re here and there, but you can become part of a larger thing in five seconds if you want to.
You go onto a Reddit, and suddenly you’re in a community of 100,000 people who have the same bizarre sexual fetish as you, and then you’re like, ‘oh thank God I’m not alone.’ So it has a bizarre simulacrum of what people need, right? Which is community … but what it lacks is the thickness or the richness. It just has the community.
The “richness” described here may simply be that, in your real life, you do in fact owe other people something. There is a collective social responsibility to keep your head firmly atop your shoulders, to not be an off-putting person, and to consider more than just your own desires.
Putting faces to usernames—that is, using the internet only as a supplement to real life, and only if it leads to bonds with actual human beings—is the most straightforward way to short-circuit online disinhibition. After all, many people share their real beliefs only anonymously because they think those beliefs would have negative social consequences. And you see this in r/MyBoyfriendIsAI: One comment on Smith’s aforementioned post states: “Feeling like someone”—that is, AI—“is touching the deeply buried things you can't ever bring up in polite company adds contrast to the things in life that are important but feel mundane.”
If people voluntarily relinquished their anonymity online, this would be the true return to normalcy in the 21st century. It would help return us to the way people interacted and formed bonds for all of human history up until, basically, the release of the iPhone (which laid the foundation for so many of our current problems).
That sounds like a tall order—again, this is all so much more effort than just doomscrolling all day. But if we wait, the order will only get taller. Most of us have gorged ourselves on digital “content” since getting our first smartphone, and I suspect that, for many of us, our screen time has gotten worse every year. That tallies up to years—maybe close to two whole decades—of mental abuse for those of us who have spent too much time online. And this is arguably worse than, say, lifestyle inflation; at the end of a spending spree, one might at least walk away with a paid-in-full mansion or a Lamborghini. But gorging on content just leaves people with the things Mounk pointed out—a worse attention span, a permanently deflated work ethic, and greater perceived social isolation.
The original promise of the internet was that it would better connect us to other people. Not to an increasingly complex set of addictive algorithms, or to an AI that mimics other people, or to other people who can’t help but become shallow parodies of themselves while hiding behind the mask of anonymity. So using the internet as a tool to bind ourselves to real people offers an alternative path.
The localism principle would make the digital world much more complicated, and probably less enjoyable: Real people can’t always be accessed at a moment’s notice—and they are also messy, confusing, and often rather annoying. Real people also judge. But like Narcissus, we can only be saved if we stop disappearing into ourselves. We should do what he could not: lift our gaze from the glassy reflection and remember our duties to the real people around us.