Supreme Court Treads Carefully with Internet Algorithms

The history and implications behind the Gonzalez v. Google case


New York Times

Beatriz Gonzalez and Jose Hernandez, the mother and stepfather of Nohemi Gonzalez, are accusing YouTube of promoting terrorism videos that led to her death.

Isadora Blatt, Editor-in-Chief

In November 2015, Nohemi Gonzalez, a 23-year-old American college student, was killed in the Paris terrorist attacks that left over 100 dead. Her family sued YouTube (owned by Google), arguing that the platform promoted videos that inspired and radicalized the attackers. In Gonzalez v. Google, argued before the Supreme Court on February 21, 2023, the Court heard the family’s appeal of a lower court’s decision to throw out the lawsuit. Although there is no evidence that the terrorists themselves viewed such videos, the videos certainly exist, and YouTube’s algorithm does recommend that kind of content to users who show interest in it.

As the case progressed, the family expanded their accusations to encompass how many platforms work: by tailoring content to viewers based on their interests. Essentially, the family was challenging the recommendation model behind Internet publishing platforms as a whole. Should online platforms be held responsible for the content that they promote? Or do these platforms, by default, have no responsibility for their users’ words and actions?

The gravity of this issue centers on Section 230(c)(1) of the Communications Decency Act, enacted in 1996. Section 230 seeks to protect online platforms from liability, holding that an interactive computer service cannot be treated as the publisher or speaker of information provided by a third-party user. Without it, online platforms could be sued for content that their users post and could be forced to pay settlements.

This nuanced immunity policy comes with many limitations. For example, Section 230 would not apply to a platform that harmed a user due to negligent design. It cannot protect platforms that directly contribute to illegal content, nor ones that fail to warn users of a known danger on the platform. 

Section 230, however, also states that online platforms are not liable for taking down users’ content that is deemed obscene or otherwise objectionable. In this way, it acts as something of a compromise: platforms have the right to monitor the content their users are posting but are not held responsible for everything that is posted at a given point in time.

This section was created in the context of the Internet’s infancy. Congress enacted it in response to a lawsuit against a very early Internet platform called Prodigy, which wanted to moderate its users’ content without being expected to monitor everything on the site. The section has largely shaped the development of the Internet since, and as this case with YouTube shows, it applies very differently to today’s more advanced algorithms. As Adam Liptak, a reporter for the New York Times who was present at the Gonzalez v. Google hearing, put it, “the internet may have outgrown section 230.”

The family’s lawyer, Eric Schnapper, argued that YouTube’s personalized recommendation system actively promoted terrorist content, overstepping the boundary of simply allowing users to publish content freely. Justice Clarence Thomas responded that the algorithm is neutral: it promotes content across the board in the same way that Google and other search engines filter results based on what they know about the user.

In one example of the infinite potential benefits of a personalized algorithm, Lila Hampers (‘24), a dancer and choreographer for Performing Dance Group (PDG) at Bishop’s, uses Instagram Reels to find inspiration. “Instagram recommends dance videos to me based on my stylistic choices,” she explained. “It shows me tutorials for different lifts and floorwork, which is helpful because I don’t have time to learn these things in classes.” Without the algorithm, Lila laughed, she wouldn’t be able to survive choreographing for PDG. “If Instagram just put completely random things on my page that weren’t specific to what I’m looking for, I don’t know what I would do.”

Similarly, Sophia Gleeson (‘24) finds musical theatre inspiration on social media platforms like Instagram and TikTok. “Once, my Instagram restarted, and I was suddenly recommended general content about architecture and other topics that I don’t care that much about,” she recalled, expressing annoyance at the inconvenience. Clearly, personalized recommendations can shape our lives in helpful ways.

As the hearing continued, after arguing that these algorithms served a harmful purpose, Schnapper expanded his argument to claim that any online search platform similar to Google that produces dangerous results should no longer have the protection of immunity under Section 230. The Court responded hesitantly, showing a reluctance to alter the section on such a grand scale, although the official decision has yet to be released.

As the Court’s hesitation suggests, this controversy is nothing new. In February 2022, Spotify experienced a similar controversy with mega-celebrity and commentator Joe Rogan, known for his podcast The Joe Rogan Experience. With his organic, conversational style of interviewing and his tendency to criticize established institutions, Rogan found his platform under fire when he brought Dr. Robert Malone on his show. After publishing an episode in which Dr. Malone, despite having medical credentials, shared his extreme anti-vaccine beliefs and conspiracies, the public was outraged at Spotify for allowing and profiting from misinformation. Still, Spotify stood its ground, defending a neutral standpoint as a platform that publishes individual artists’ work regardless of bias.

In January 2021, on the other hand, we saw Twitter ban former President Donald Trump from the platform for spreading misinformation. Up until a certain point, Twitter officials had taken that same neutral standpoint, reasoning that they had historically allowed a wide range of world leaders to maintain their own accounts. Then, after his posts attempting to overturn the election results led to the violent riot that overtook the Capitol, Twitter suspended his account. Here, Twitter drew the line when posts had directly harmful effects on people, taking accountability and no longer standing idly by.

Between issues of vague, hard-to-target algorithmic mechanisms and larger-scale issues involving hugely influential figures, one thing is for certain: the relationship between online platforms’ proclaimed neutrality and the political views that their users promote is under constant scrutiny. As for the Gonzalez v. Google decision, Cathy Gellis, an independent attorney in the San Francisco Bay Area, predicted that the Court will not be convinced to change Section 230. “It appeared overall that there was not a huge appetite to upend the internet, especially on a case that I believe for them looked rather weak from a plaintiff’s point of view,” she told CNBC. Eric Goldman, a Santa Clara University School of Law professor, agreed, while still feeling concerned about the potential consequences if the Court does decide to change the section. “I remain petrified that the opinion is going to put all of us in an unexpected circumstance,” Goldman said. As this case has brought to light, only time will tell whether a decisive change will occur, or whether this nuanced issue will cause even deeper division and consequences.