Mending Section 230

People wait in line outside the US Supreme Court in Washington, DC, on February 21, 2023, to hear oral arguments in two cases testing Section 230, the law that provides tech companies a legal shield over what their users post online. Jim Watson/Getty Images

Section 230 of the Communications Decency Act, passed as part of the Telecommunications Act of 1996, has been a centerpiece of policy supporting internet growth ever since. §230 provides expansive immunity from civil liability to any provider of an "interactive computer service" for user speech hosted on its website or platform. Most essentially, the law provides a safe harbor: interactive computer services are not liable for taking down user-generated content ("UGC"), whether or not that material is constitutionally protected. 47 U.S.C. § 230(c)(2). But the more controversial form of immunity comes in the blanket provision: "no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." 47 U.S.C. § 230(c)(1). In the regulation of speech-related violations of law, publisher liability attaches, for example, to those who print or repeat defamatory statements. A newspaper that prints a defamatory statement, and even the newsstand that carries the paper, can be held liable for the contents of that speech, the latter if it reasonably knew of its illegal character. But for the internet, Congress adopted the rationale that the internet "flourishes with a minimum of government regulation" and would fail any other way. 47 U.S.C. § 230(a)(4). Supporters of the law have argued that it would be impracticable and crippling to impose publisher liability on internet platforms. Critics call for the kind of accountability that grew out of the print media system. Efforts to repeal the statute outright surfaced in 2020 and 2021, and while some congressional leaders would like to give that movement a third wind, fans of §230 and of a laissez-faire internet possess formidable power to oppose repeal.

Instead of waiting, the Brooklyn Law Incubator & Policy Clinic (BLIP) redrafted §230 in 2025, aiming to strike a balance between preserving the safe harbor, a legal function foundational to the internet, and curbing the unintended harms its loopholes create. As some of the earliest cases surrounding §230 immunity suggested, content moderation is disincentivized when moderating exposes a platform to publisher liability. Stratton Oakmont, Inc. v. Prodigy Services Co., 23 Media L. Rep. 1794 (N.Y. Sup. Ct. 1995). §230(c)(2) is an essential aspect of the law, without which moderation of user-generated content on the platforms would be all but impossible, colliding with the robust body of law governing the suppression of speech under the First Amendment.

But §230(c)(1) is the source of negative externalities. The statute operates on a presupposition of platform moderation: that platforms want, in good faith, to present moderated digital spaces. We wanted to believe that. It was also written from the perspective of 1996, when moderation costs were much higher and the need to foster internet growth was much greater. Of course we did not want to stifle this beautiful new world.

Today, the internet is very different, its wilderness largely domesticated, divvied up, and controlled. As internet spaces proliferated, each came to offer a curated, sub-internet experience, feeding the modern consumer perception that the web operator is speaking when one visits its site as much as, if not more than, any individual user-contributor. But not all web platforms look like Facebook and X. Some look much the same as they did a generation ago. Having grown up in a freer internet, I was enriched by the online world available in the early aughts. Over 38% of web pages that existed in 2013 have since been deleted.1 Most notable has been the emergence of expansive, dominant web platforms, namely social media platforms, which generate enormous levels of traffic and user-generated content like never before.

§230 was not built with these titans in mind, not only with respect to their size but also the ways in which they engage with user speech on their platforms. Major social media operators filter, promote, demote, and use algorithms to sort and dictate what users see and do not see. Their business models employ addictive feeds, recommending, sorting, and prioritizing a flow of content designed to hold user attention. And the most sensational content has thrived in the moderation environment that has existed to date. Until 2024, these platforms placated government by projecting shrewd, controlled moderation of user content. See In Re Cambodian Prime Minister (Meta Oversight Board 2023). But this was largely performative, and even where deployed to regulate platform environments, self-regulation has failed to prevent violence, disinformation, and even foreign efforts to undermine election integrity.2 Former moderators at Meta have attested to how few posts flagged as violating Meta's terms of use they were actually able to have removed.3 Because this content is often highly sensational and useful to the algorithm, platforms' incentives leave them poorly positioned to act as their own moderators without oversight, as the law allows.

But Facebook's moderators are largely gone now. The platforms' posturing to appease growing government discontent gave way to a change in administration characterized by a deregulatory, collaborative approach to big tech. With the threat of revising §230 immunity seemingly defeated, and with the confidence to enact a sweeping change in public relations, the major tech platforms that support the law have changed their posture. Channeling momentum from the digital de-censorship movement on the political right, big tech has rebranded moderation as censorship and halted all performative moderation in the Trump era in the name of internet freedom. This realigns with the interests of the social media platforms, which capitalize on content and are fundamentally conflicted in moderating their own spaces.

But acknowledging the need for change does not require denouncing the past entirely. The case law surrounding §230 carved out certain grounds for divestiture of safe harbor protections, so BLIP began by codifying settled common law, creating a formal carveout from immunity where one is needed. Over time, the grounds for divesting §230(c)(1) immunity have been given names, even if they have not been fully clarified. If an interactive computer service provider "creates or develops" the illegal speech, it can be held liable as a publisher. Jones v. Dirty World Entertainment Recordings LLC, 755 F.3d 398 (6th Cir. 2014). "Developing" has been defined as materially contributing to the alleged illegality. Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008). And so, our revision of the statute began by codifying this case law. Under BLIP's §230(c)(1), "creating, developing, or materially contributing to the alleged illegality" are among the express grounds for a web platform's loss of §230 rights. The questions became how to decide which class of internet operators gets §230 safe harbor protection, and how to identify and separate the class that has abused it and should be divested of immunity.

The Stop Addictive Feeds Exploitation Act ("NY SAFE Act"), a 2023 New York State statute the BLIP Clinic helped to author, tackled this issue. The BLIP Amendment to §230 drew inspiration from NY SAFE's language describing the addictive algorithmic technologies that power social media feeds, definitions that meaningfully reflect the attention epidemic in the letter of the law. From them, we created further grounds for the carveout from publisher immunity, essentially calling for wholesale exclusion of social media platforms from immunity under §230(c)(1). Under the BLIP Amendment, if a web operator employs algorithmic sorting, filtering, demotion, or promotion of user speech, it loses immunity and faces publisher liability for the promoted user speech. With the common law carveouts for material development of, or contribution to, speech codified in the statute as bars to §230 immunity, platforms will not have unchecked power to speak through closed lips. User demand for online speech outlets will continue to favor platforms that provide a minimum of regulation, and if platforms over-moderate in the wake of this amendment's passage, users will be driven to truly neutral online speech fora, which act more like true town squares, free from the monetization and amplification of societal harms to the most vulnerable.

1 Peter at Spiceworks, "Did you know huge chunks of the internet are disappearing?" (2024). https://community.spiceworks.com/t/did-you-know-huge-chunks-of-the-internet-are-dissapearing/1109100.

2 Confessore, Nicholas. "Cambridge Analytica and Facebook: The Scandal and the Fallout" (2018). https://www.nytimes.com/2018/04/04/us/politics/cambridge-analytica-scandal-fallout.html

3 Criddle, Cristina. "Facebook moderator: every day was a nightmare" (2021). https://www.bbc.com/news/technology-57088382

