The Supreme Court is hearing oral arguments today in Gonzalez v. Google, a landmark case about Section 230 and content on the internet. Section 230 protects platforms from being held liable for content users post on their sites.
Although a number of observers worried that the court's handling of the case could upend how websites provide and display user-generated content, the justices seemed to grasp the statute and its implications, and signaled support for upholding Section 230, a fundamental principle of the modern internet.
The first round of questions was directed at the lead counsel for the Gonzalez family, whose daughter was killed by ISIS terrorists in Paris and who sued YouTube parent company Google over its recommendation of ISIS videos. The justices' comments and questions indicated that they don't believe Google's recommendations algorithm rises to the level of "aiding and abetting" terrorists, and that the company is protected under Section 230.
When questioning began, Justice Clarence Thomas, whose previous comments suggested he thinks it's time to reconsider 230, appeared to believe that algorithmic recommendations are protected under the statute. He noted that they tend to be content-neutral, and that an algorithm recommending rice recipes is no different from one that may have recommended ISIS videos.
In response to questioning, the Gonzalez family’s attorney, Eric Schnapper, claimed that Google lost its immunity by creating thumbnail images for videos, which made them a publisher and opened them up to liability.
Justices didn't seem to buy this argument. At one point, Justice Samuel Alito said he was completely confused by it, and that treating thumbnails as unprotected content under 230 could make YouTube liable for every single thing posted on its site, given how widely thumbnails are used.
“Our contention is the use of thumbnails is the same thing under the statute as sending someone an email and saying, ‘You might like to look at this new video now,’” Schnapper responded.
However, justices’ comments suggested they also don’t believe that recommending videos, or providing information, like a phone number for a known terrorist, makes Google liable for users’ actions.
"There has to be some intent to aid and abet," Justice Sonia Sotomayor said. "You have to have knowledge that you're doing this. So how do you get there?"
The justices noted that Google isn’t designing its algorithms to recommend ISIS videos; rather, it is reflecting users’ desires.
Justices also seemed cognizant of the potential impact of overriding Section 230. Justice Elena Kagan noted how these algorithms are embedded throughout the internet, hypothesizing that search engines could be held liable for their rankings if Section 230 is overturned.
Justice Brett Kavanaugh noted the number of amicus briefs the court received highlighting how much its decision could upend the internet and negatively impact the economy, and said that the court needs to take that into consideration.
Questioning is still underway in the hearing. The United States government is arguing its position that 230 should be upheld, but that companies could and should be held liable for recommendations.
In questioning Google's lead counsel, Lisa Blatt, Justice Ketanji Brown Jackson suggested that platforms have stretched Section 230 beyond its logical extreme, noting that part of the statute, as originally written, offered protections to companies trying to remove content in good faith to clean up the internet.
But instead of blocking and screening content, Brown Jackson said, "YouTube was creating a separate algorithm that pushes [content to] the front, that more people see it than otherwise, how is that conceptually consistent" with the intent of that part of 230. Brown Jackson noted that if Section 230 were read that way, it would offer only a narrow scope of immunity, one that doesn't protect recommendation algorithms.
The Supreme Court will hear a similar case, Twitter v. Taamneh, tomorrow. It is expected to issue its ruling on the two cases before the end of its term.