Fight over social media’s role in terror content goes to Supreme Court
WASHINGTON - The Supreme Court on Monday said it will hear a case that tests the limits of Section 230, the U.S. legal provision that protects social media companies from liability for what third parties post to their sites.
The high court’s decision in the case, which involves Google’s alleged responsibility for terrorist propaganda on its subsidiary YouTube, could have long-lasting ramifications for how internet sites treat users’ posts.
The case was brought by the family of Nohemi Gonzalez, a 23-year-old student who was killed in a 2015 ISIS terrorist attack in Paris. The suit alleges that Google’s YouTube “aided and abetted” ISIS, in part by allowing its algorithms to recommend video content from the terrorist group.
Section 230 was passed in 1996 and is credited with helping lay the groundwork for the internet as we now know it. It broadly immunizes websites and online platforms, including social media sites like YouTube, Facebook and Twitter, from being held responsible in civil lawsuits for what their users post.
The law has sparked controversy for years, heating up significantly during the Trump administration, when the president pointed to the law as supposedly enabling social media companies to “censor” conservatives online.
Politicians on both sides of the aisle, including President Biden, have called for reforms to Section 230.
“The entire scope of Section 230 could be at stake, depending on what the Supreme Court wants to do,” said Jeff Kosseff, a cybersecurity law professor at the U.S. Naval Academy and the author of a book on Section 230, “The Twenty-Six Words That Created the Internet.”
Relatives of Gonzalez contend that YouTube used its computer algorithms to recommend ISIS videos to users who might be interested in them, using the information the company collects about users. YouTube and many other social media companies use such algorithms to keep people engaged on their sites by showing them posts, videos, photos and other content similar to material they have already viewed.
The complaint alleged that officials at parent company Google were aware its technology was aiding ISIS.
“Videos that users viewed on YouTube were the central manner in which ISIS enlisted support and recruits from areas outside the portions of Syria and Iraq which it controlled,” the plaintiffs alleged in their request for the Supreme Court to consider the case.
Google has argued that “the complaint does not allege that any terrorists saw such a recommendation or that such recommendations had any connection to the Paris attack.” In court documents, the company argued that this case was not the appropriate way to consider Section 230.
The U.S. Court of Appeals for the Ninth Circuit agreed that Section 230 barred most of the plaintiffs’ claims.
The Supreme Court’s action marked the first time the court will directly evaluate Section 230, said Eric Goldman, a professor and co-director of the High Tech Law Institute at Santa Clara University School of Law. It sets up the court to possibly draw a line between social media platforms manually recommending content and using algorithms to do so, a distinction he called a “false dichotomy.”
“The question presented creates a false dichotomy that recommending content is not part of the traditional editorial functions,” he said. “The question presented goes to the very heart of Section 230 and that makes it a very risky case for the internet.”
The Supreme Court also said Monday it would consider a separate but related lawsuit involving Twitter. That case was filed by relatives of Nawras Alassaf, who was killed in a terrorist attack in Istanbul in 2017. The claim accused Twitter, Facebook and Google of violating the Anti-Terrorism Act by allowing ISIS to use their sites.
The lower courts did not directly address Section 230 in that case, however, and Twitter asked the Supreme Court to consider it only if the justices also heard the Google case.
- - -
The Washington Post’s Will Oremus contributed to this report.