
How the Supreme Court Could Reshape Social Media

Internet companies might lose some protection from legal liability for user-generated content.

(Photo by Jakub Porzycki/NurPhoto via Getty Images)

[UPDATE February 22, 2023: When the Supreme Court heard oral arguments in Gonzalez v. Google Tuesday, justices seemed hesitant to strike down broad legal protections for internet providers whose users post illegal content. But in a related case Wednesday, Twitter v. Taamneh, justices grappled with how to hold internet companies responsible for users pushing prohibited content, particularly when it comes to spreading terrorist ideology. 

During Tuesday’s arguments (see The Morning Dispatch for more details) justices on both ends of the ideological spectrum seemed concerned about what a shakeup to Section 230 of the Communications Decency Act might mean for internet companies.

“Lawsuits will be nonstop” if the court rules against the tech companies, Justice Brett Kavanaugh said. Justice Amy Coney Barrett also questioned whether average social media users could face legal peril for retweeting or liking content that drew legal challenges. Justice Elena Kagan suggested Congress, not the judiciary, should tackle the issue: “These are not like the nine greatest experts on the internet.”

Arguments in Wednesday’s case addressed whether internet companies can be held liable under the Anti-Terrorism Act for aiding and abetting terrorism by hosting terrorism-related content. Both liberal and conservative justices were more confrontational toward Big Tech’s lawyers, seeming sympathetic to arguments that Twitter could have done more to boot terrorists from its services. “Willful blindness … can constitute knowledge,” Justice Sonia Sotomayor said.

Yet the justices still expressed caution about the ramifications—and feasibility—of holding internet platforms responsible for user-generated content. If the court isn’t clear in determining cause and effect in particular cases, “then it would seem that every terrorist act that uses this platform would also mean that Twitter is an aider and abettor in those instances,” Justice Clarence Thomas said.

Rulings are expected by the start of the summer.]

Lawmakers in both parties have laid the groundwork in recent years to limit the broad legal protections U.S. tech companies currently enjoy for content their users create. While those efforts continue, the Supreme Court could get the first bite at the apple.

A pair of cases on the court’s current docket, Gonzalez v. Google and Twitter v. Taamneh, question when tech giants can be held civilly liable for content posted to their platforms. In each case, the initial plaintiffs were American family members of people killed in terrorist acts abroad. They sued tech giants—Google in the former case; Google, Facebook, and Twitter in the latter—over allegedly permitting content supportive of terrorism to proliferate on their social media sites. 

Under existing case law, these would be open-and-shut cases thanks to Section 230 of the 1996 Communications Decency Act. The key provision of Section 230, often described as a statutory pillar of the modern internet, consists of 26 words: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

In keeping with that provision, lower courts have long ruled that internet companies hosting user-generated content can’t be held liable as accessories to crimes their users committed through those platforms. But these matters have never been litigated at the Supreme Court. Given the high court’s willingness to take up the cases—rather than allow existing jurisprudence to stand—there’s reason to believe the conservative majority could chip away at existing protections for the tech giants.

Justice Clarence Thomas in particular has made his hostility to Section 230 liability protections known. He has argued repeatedly in written statements that the court should “address the proper scope of immunity under §230 in an appropriate case.”

Neither case has been scheduled for oral arguments yet. But the justices’ decision in Gonzalez will turn on how they answer the following question: Under Section 230, internet companies aren’t liable for terrorist or terrorism-abetting speech others post on their platforms. When a company’s proprietary algorithms amplify such speech and target it at particular users, however, does that targeting make the speech the company’s own?

That’s the argument the family of Nohemi Gonzalez, a 23-year-old American student killed in an ISIS-linked terrorist attack in Paris in 2015, has made in its lawsuit against Google. “Mere posting on bulletin boards and in chat rooms was the prevalent practice when Section 230 was originally enacted,” the plaintiffs argued when asking the Supreme Court to hear the case. “But over the last two decades, many interactive computer services have in a variety of ways sought to recommend to users that they view particular other-party materials.” They continued that “those recommendations are implemented through automated algorithms, which select the specific material to be recommended to a particular user based on information about that user that is known to the interactive computer service.”

By serving up terrorist-adjacent content on YouTube to users likely to be sympathetic to it—even if no human at Google intended the algorithm to function that way, and despite the absence of evidence that the perpetrators of the Paris attack had been radicalized this way—Google functionally abetted the rise of ISIS, the family claimed.

Two lower courts rejected that argument on familiar Section 230 grounds. The use of “content-neutral algorithms,” wrote Judge Morgan Christen of the 9th Circuit Court of Appeals, “does not expose [a website] to liability for content posted by a third-party. Under our existing case law, §230 requires this result.” But the Gonzalez family hopes to find a more sympathetic audience at the Supreme Court. Their petition even explicitly references Justice Thomas’ stated desire to find “an appropriate case” to reevaluate the current status of Section 230 protections: “This is that case,” they write.

The second case, Twitter v. Taamneh, has an unusual relationship with Gonzalez. On the merits, the two cases are largely the same. As in Gonzalez, the original plaintiffs in what was then Taamneh v. Twitter were family members of a terror attack victim: Nawras Alassaf, a Jordanian citizen killed in a 2017 ISIS-affiliated attack on a nightclub in Istanbul. As in Gonzalez, the plaintiffs argued tech companies should be held responsible as abettors of the crime given how the terrorist group had used their platforms. Here, too, the alleged connection was general rather than particular: The plaintiffs did not accuse the companies of being implicated in the specific attack that killed Alassaf, but simply in allowing ISIS to flourish in the first place.

What distinguishes Taamneh from Gonzalez is how the two cases made their way through the lower courts. In Taamneh, a district court ruled in the tech companies’ favor without making reference to Section 230’s liability shield. Rather, the court found the alleged connection between the tech companies and the act of terror too tenuous to even consider Section 230 protections.

The 9th Circuit disagreed. It ruled that the companies’ alleged acts—failing to clamp down on terrorist-adjacent content to the full extent of their ability—plausibly constituted aiding and abetting under the Anti-Terrorism Act of 1990. And because the district court had not determined whether the companies qualified for Section 230 immunity in Taamneh, the circuit court reversed and remanded the case to the lower court to consider that question.

This put the social media companies in an odd position. Their case in Taamneh was headed back to the district court to relitigate the Section 230 issue, which they stood a good chance of winning. But at the same time, Gonzalez was going up to the Supreme Court to kick the tires on that very issue. As a result, Twitter petitioned the Supreme Court to take up its case too, but conditionally: “The Court should deny review in [Gonzalez],” the companies wrote, “but if the Court grants review there, it should grant in this case as well.” (For the curious, this was the moment when Taamneh v. Twitter became Twitter v. Taamneh.)

If Gonzalez is the case in which Section 230 liability protections will be litigated, then Taamneh—the case that so far has been litigated without any consideration of that shield—will concern what comes next. Even if the court curtails companies’ Section 230 protections in Gonzalez, it could still agree with the district court that the Taamneh plaintiffs’ theory of platform culpability under the Anti-Terrorism Act is a stretch.

If Section 230 protections are limited in Gonzalez, Taamneh is likely to be the first of many cases exploring where the new liability lines are—and how much trouble the tech companies are likely to be in.

“If it’s the case that online services can be held liable for what algorithms recommend to users, then I think search as a function is impossible,” Matthew Feeney, head of technology and innovation at the London-based Centre for Policy Studies, said. “If the answer to the question is yes, you can be held liable for the content, I think the internet as we know it, it’s going to look a little different.”

Andrew Egger is a former associate editor for The Dispatch.

Harvest Prude is a former reporter at The Dispatch.
