The ‘Twitter Files’ Show It’s Time to Reimagine Free Speech Online

Platforms tried speech codes. There’s a better way.

Twitter owner Elon Musk and his Twitter profile. (Photo by Muhammed Selim Korkutata / Anadolu Agency via Getty Images.)

A few years ago I was invited to an off-the-record meeting with senior executives at a major social media company. The topic was free speech. I’d just written a piece for the New York Times called “A Better Way to Ban Alex Jones.” My position was simple: If social media companies want to create a true marketplace of ideas, they should look to the First Amendment to guide their policies.

This position wasn’t adopted on a whim, but because I’d spent decades watching powerful private institutions struggle—and fail—to craft speech regulations that purported to permit open debate while simultaneously suppressing allegedly hateful or harmful speech. As I told the tech executives, “You’re trying to recreate the college speech code, except online, and it’s not going to work.”

I’ve been thinking about that conversation ever since Elon Musk took over Twitter, and particularly since Matt Taibbi and Bari Weiss last week began releasing selected internal Twitter files at Musk’s behest. These files detail, among other things, Twitter’s decisions to block access to a New York Post story about the contents of Hunter Biden’s laptop ahead of the 2020 election, Twitter’s decision to eject Donald Trump from the platform, and the ways in which Twitter restricted the reach of tweets from a number of large right-wing Twitter accounts. 

The picture that emerges is of a company that simply could not create and maintain clear, coherent, and consistent standards to restrict or manage allegedly harmful speech on its platform. Moreover, it’s plain that Twitter’s moderation czars existed within an ideological monoculture that made them far more tolerant toward the excesses of their own allies. 

In other words, Twitter behaved exactly like public and private universities in the era when speech codes ruled the campus. 

At the risk of oversimplifying history, here’s the short story of modern university censorship. As American universities grew more diverse, a consensus emerged in universities both public and private that schools should strive to create a “welcoming” environment for students and faculty, with particular attention paid to protecting students from discrimination on the basis of protected categories such as race, sex, sexual orientation, and gender identity.

Federal and state laws required colleges and universities to protect students from harassment on the basis of protected characteristics. But schools wanted to go further. They wanted to make sure that students and faculty were protected from psychological discomfort. The speech code was born.

At the same time, however, schools were still eager to proclaim their support for academic freedom and free speech. So the message to the campus community boiled down to something like this—all speech is free except for hate speech. But what was hate speech? The definitions were broad and malleable.  

Temple University, for example, banned “generalized sexist remarks.” Penn State University declared that “acts of intolerance will not be tolerated,” and defined harassment as “unwelcome banter, teasing, or jokes that are derogatory, or depict members of a protected class in a stereotypical and demeaning manner.” 

One of the worst speech codes I ever read was enacted at Shippensburg University, a public university in Pennsylvania. The policy was remarkably broad: “Shippensburg University’s commitment to racial tolerance, cultural diversity and social justice will require every member of this community to ensure that the principles of these ideals be mirrored in their attitudes and behaviors.” 

It doesn’t take a legal genius to realize that these speech rules were so broad that they granted administrators extraordinary power over free speech. Combine that power with the ideological blinders that are inherent to any political monoculture, and you have a recipe for staggering double standards in censoring political and religious speech. I could fill an entire newsletter with stories of such abuses.

Back in my litigation days, I led legal teams that followed a few simple rules. First, public institutions must comply with the First Amendment, and they should be sued if they don’t. Second, private universities have the freedom to craft their own rules, but if they promise free speech, they should deliver, and there is no better model for delivering free speech than the First Amendment.

The same message should apply to social media. As a private company, you can choose to become, say, a “progressive social media platform” or a “website for Christian connection and expression” and govern yourself accordingly. But if you hold yourself out as a place that welcomes all Americans, then you’re courting disaster if you depart from the lessons learned from constitutional law.

To be clear, to say that First Amendment principles should guide private platforms is not the same thing as saying “anything goes” any more than protecting the First Amendment on campus creates chaos. Far from it. Campuses must and do protect individuals from targeted harassment, for example, and they can use reasonable time, place, and manner regulations to channel speech into particular places and specific hours of the day.

For example, it’s one thing to yell, “Trump 2024!” on the quad in the middle of the day; it’s another thing entirely to walk up and down the halls of a dorm at 2:00 a.m. yelling the same thing. Yelling in the quad is free speech, while interrupting sleep in the dark of night can be a form of harassment.

To take another example of appropriate speech restrictions, while there are sharp limits on the ability of the government to regulate pornography, it can absolutely restrict access to graphic images when children are present. For example, the FCC prohibits “obscene, indecent, and profane broadcasts” on the radio and network television. 

The FCC exercises authority over radio and television network content because the federal government controls access to the airwaves. It grants licenses to use the finite number of frequencies available. It does not exercise that same control over subscription services, which is why prime-time programming on CBS looks very different from prime-time programming on HBO.

But not even the FCC has the power to prefer one political point of view over another. If it promulgated regulations that granted Democrats preferential access, they’d be struck down immediately. The reason is a core principle of the First Amendment, one that social media platforms should adopt as well: viewpoint neutrality.

The principle of viewpoint neutrality means that any regulation of speech, including time, place, and manner regulations, should be crafted and enforced without regard to the underlying viewpoint of the speaker. The same rules apply to Democrats and Republicans alike, to Christians and atheists, to soldiers and pacifists. The same rules apply even to people who hold the most reprehensible viewpoints, including communists and fascists. 

Along with viewpoint neutrality, there’s another key constitutional principle that’s critical to maintaining the marketplace of ideas—clarity. Rules that are vague or overbroad can chill free speech every bit as effectively as a rule that specifically targets disfavored speech for censorship. Even otherwise-acceptable time, place, and manner regulations can be unlawful if they grant public officials too much discretion to restrict speech.

How does all this apply to Twitter, Facebook, and every other large social media platform on the planet? First, it means giving up the quest for a free speech utopia and embracing viewpoint neutrality. There is no way to create a meaningful free speech environment that allows for actual debate while protecting participants from hurtful ideas or painful speech. Executives at Twitter or Meta are no better than college administrators at crafting the perfect speech code. The brightest minds have already made that effort, and even the brightest minds have failed.

Second, it means moderating on the basis of traditional speech limits. Even institutions that embrace viewpoint neutrality will place limits on speech. They’ll have to. If there is one thing we know from decades of experience with the internet, it is that completely unmoderated spaces can and do become open sewers that are often unsafe for children and deeply unpleasant for adults. Unmoderated spaces can become so grotesque that they’re simply not commercially viable.

“Viewpoint neutral” is thus not a synonym for “unmoderated.” Consistent with viewpoint neutrality, a platform can impose restrictions that echo offline speech limitations. Defamation isn’t protected speech. Neither is obscenity. Harassment is unlawful. Invasions of privacy (doxxing, for example) should face sanctions. Threats and incitement violate criminal law. A platform can say, “Children are present. No nudity.” 

It is easy to imagine different rules that make it easier to talk about issues and harder to target individuals. Viewpoint-neutral time, place, and manner regulations that could prevent some of the worst conduct on Twitter include limiting or eliminating the quote-tweet function, limiting the visibility of replies to other users’ tweets, or limiting the ability of users to reply to or interact with tweets of people they don’t follow.
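To make that concrete, here is a minimal sketch, in Python, of what such rules might look like. Every name in it is hypothetical (no real platform API is implied), but it illustrates the defining property of viewpoint neutrality: the rules examine only the structure of an interaction, who is engaging with whom and how, never the content of the speech or the politics of the speaker.

```python
# A minimal, hypothetical sketch of viewpoint-neutral "time, place, and
# manner" rules for a social platform. All names are invented for
# illustration. Note what the rules never inspect: the text of the tweet,
# the identity of the speaker, or the viewpoint expressed.

from dataclasses import dataclass


@dataclass
class Interaction:
    kind: str                    # "reply" or "quote_tweet"
    author_follows_target: bool  # does the interacting user follow the
                                 # person whose tweet they're engaging with?


def is_allowed(interaction: Interaction, allow_quote_tweets: bool = False) -> bool:
    """Apply the same structural rules to everyone, regardless of viewpoint."""
    # Limiting or eliminating the quote-tweet function.
    if interaction.kind == "quote_tweet" and not allow_quote_tweets:
        return False
    # Limiting replies to tweets from people the replier doesn't follow.
    if interaction.kind == "reply" and not interaction.author_follows_target:
        return False
    return True


# A political reply and an apolitical reply face exactly the same test.
print(is_allowed(Interaction(kind="reply", author_follows_target=True)))        # True
print(is_allowed(Interaction(kind="quote_tweet", author_follows_target=True)))  # False
```

Note what the sketch leaves out by design: there is no field for the tweet’s content, so the rule cannot be bent against a disfavored viewpoint even by a moderator inclined to bend it.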

Third, it means embracing clarity and transparency. Make rules clear. Create an appeals process when users are penalized. No human institution is ever going to apply its rules perfectly, and accountability is necessary. Secrecy in decision-making can impair trust every bit as thoroughly as flaws in the substance of the decisions made.

Indeed, one of the interesting lessons of the last few years is that social media censorship is both divisive and ineffective. It often backfires. In a free society, attempts to censor speech often create a demand for that speech. Twitter censoring the Hunter Biden story, for example, didn’t squelch its reach. Internet searches for Hunter Biden skyrocketed after Twitter took action.

The idea that censoring speech can have the opposite effect is so well-known that it has a term—the Streisand Effect. In 2003, Barbra Streisand sued to have a picture of her home removed from an internet site. At the time she filed the suit, the image had only been downloaded a grand total of six times (twice by her lawyers). After her suit hit the news, the image was downloaded 420,000 times in a single month.

The reality of the Streisand Effect can create perverse incentives. Bad actors will intentionally court suspensions or flirt with outright bans to generate attention and sympathy. 

I don’t believe that Twitter or any other social media company monopolizes the marketplace of ideas. I also believe they have the right to set their own policies. If Twitter wants its moderation policy to simply be, “Elon Musk decides,” then it’s Twitter’s right to set that policy. I don’t have to use the service, and I can take my speech to countless other platforms to share my excellent takes on Aquaman and Ja Morant. 

Universities—even private universities—eventually learned an important lesson in free speech. The latest speech code survey by the Foundation for Individual Rights and Expression (FIRE) indicates that only 18.5 percent of surveyed universities have a “red light” (clearly speech-restrictive) speech policy. That’s down more than 50 points from 2009, a year I filed multiple free speech lawsuits against public universities. 

Social media companies should take note. The upheaval caused by Elon Musk’s Twitter takeover—along with the controversy generated by the “Twitter Files”—represents an ideal opportunity for a free speech rethink. New platforms can benefit from old principles, and when it comes to managing a marketplace of ideas, centuries of First Amendment jurisprudence can help light the way.

One more thing …

In the latest Good Faith podcast, Curtis and I had a great time talking to bank robber and lawyer Jesse Wiese. Yep, that’s right—he’s a bank robber and a lawyer. In the podcast he tells his remarkable story of how his life changed behind bars, and we talk about the church’s role in criminal justice reform, including discerning the difference between justice and vengeance. It’s a fascinating conversation. Give it a listen.

One last thing …

It’s still Christmas season, so this week you’re getting one of my favorite versions of “O Come, O Come Emmanuel.” It’s by for King & Country, and it’s different. Enjoy!

David French is a columnist for the New York Times. He’s a former senior editor of The Dispatch. He’s the author most recently of Divided We Fall: America's Secession Threat and How to Restore Our Nation.
