Everything You Need To Know About Section 230
A handy primer to prepare you for the Section 230 debate
Over the past year a cottage industry of sorts has sprung up -- horribly bad, incorrect op-eds about the purpose of Section 230 and why it needs to be repealed. I don’t mean the op-eds get the technicalities of the law wrong; I mean the writers misinterpret the law entirely or construct elaborate strawmen in order to make a case for repealing Section 230.
In the spirit of once again offering an explanation of what Section 230 is, why it exists, and what it does and does not do I give you this primer. I’ll also address most of the misconceptions about Section 230 at the end; that part will be a living document that will be added to as new misconceptions appear.
Bookmark it, share it with your friends, stick it on your refrigerator, print it out and staple it to light posts, or however you’d like to spread the word.
Let’s start with the text of Section 230 itself:
(1)Treatment of publisher or speaker
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2)Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A)any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
(B)any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
Section 230 was originally part of the Communications Decency Act of 1996 (CDA) and is the only major part of that legislation that was not struck down as unconstitutional. It was conceived of and written when the internet was starting to gain popularity with the general public. The internet had created an ability that did not exist before -- the ability for third-party users to post content directly to a platform without undergoing any editorial process.
I am going to take the liberty of quoting from one of my previous pieces on Section 230 to explain the case law that preceded it:
“To understand the intent and purpose of Section 230, one needs to know its history and what preceded it. Prior to Section 230’s passage, two pieces of case law governed the legal precedents for online content moderation: Cubby Inc. v. CompuServe Inc. and Stratton Oakmont Inc. v. Prodigy Services Co. The upshot of these cases was a court ruling that CompuServe, which did not moderate content on its platform, could not be held legally liable for posts on it, while Prodigy, which did moderate content, could be held legally liable for anything posted there.”
To put it another way, the court treated Prodigy as a publisher that could be held legally liable for content it allowed, whereas CompuServe was treated as a platform and therefore could not be held legally liable for any content. The Prodigy case is particularly instructive: Prodigy was held liable for posts it had left up precisely because it moderated other content on its service (and yes, it was THAT Stratton Oakmont that sued Prodigy).
Enter Representative Chris Cox (R-CA) and Senator Ron Wyden (D-OR), the authors of Section 230. Both men felt the case law established in the CompuServe and Prodigy cases disincentivized platforms from engaging in content moderation. They felt it was especially unfair to Prodigy, as the platform moderated content in order to create a family-friendly internet service provider. Cox and Wyden saw that the internet was going to be a huge phenomenon and that steps needed to be taken to address future content moderation issues. Hence, the two men wrote Section 230 to correct what they felt was unfair case law.
Fun trivia fact - Section 230 was originally known as Section 509 of the CDA; after the rest of that act was ruled unconstitutional, Section 509 was added to the Communications Act of 1934 as Section 230 of that act.
Now that you know the history and rationale behind Section 230, let’s tackle some of the misconceptions its critics use to argue for its repeal.
Q - But if a platform moderates and removes content, doesn’t that turn it into a publisher?
A - Nope! The text and purpose of Section 230 is specifically to clarify that a platform cannot be treated as a publisher of any content posted by a third party. The law also states that a platform cannot be held liable for removing any content it finds objectionable, even if that content is protected by the Constitution.
Here’s my shorthand for the difference between a publisher and a platform: a publisher edits content before it is posted, while a platform edits content after it has been posted.
Q - Section 230 is just a special legal carveout for Big Tech right? Lots of people have told me that, so it must be true?
A - Again, nope! Section 230 applies to Big Tech, Medium Tech, and Small Tech equally. Any platform that allows third-party content is protected, from Twitter to Amazon to Yelp to Airbnb to Substack to 8kun to those weird brony message boards. In fact, Section 230 grows in importance the smaller the platform is -- Twitter can afford to fight a bunch of nuisance suits; a small blog with an open comments section can’t.
Section 230 is vital for protecting smaller and newer platforms; without the protections the law provides, there is no way those platforms could compete with large, established competitors.
Q - Section 230 totally requires platforms to be neutral in their content moderation decisions, correct? They can’t be biased against any one group of people?
A - There is nothing in Section 230 that requires a platform to maintain any level of neutrality, political or otherwise. Every platform has the right to moderate content as it sees fit, and even if Section 230 disappeared tomorrow, the 1st Amendment would still protect those moderation decisions.
(side note - many of the arguments against Section 230 are actually arguments against the 1st Amendment)
Q - The Supreme Court ruled in Marsh v. Alabama that a private company cannot regulate what is said in the “public square” even when that square is company property, so doesn’t that mean social media can’t regulate 1st Amendment-protected speech?
A - This is a newer argument that’s popped up recently: the invocation of the Supreme Court’s 1946 ruling in Marsh v. Alabama against certain forms of content moderation. The problem is, the reach of that ruling was sharply limited in 1972 by Lloyd Corp. v. Tanner, which held that private businesses have property rights that allow them to determine what forms of speech they allow on their grounds.
A closer case for this argument would be the Supreme Court’s 2019 ruling in Manhattan Community Access Corp. v. Halleck -- but that case cuts the other way, with the Court holding that private companies cannot violate the 1st Amendment because they are not state actors and are not exercising “powers traditionally exclusive to the state.”
Q - Doesn’t Section 230 make it impossible to sue anyone for anything said on the internet?
A - No, it makes it impossible for a platform to be sued over anything a third party posts to it. I’ll use the New York Times’ website as an example: if the Times runs an article containing false or defamatory content, it can be sued over that article, but it cannot be sued over a third party posting false or defamatory content in the comments section. The Times example also shows how some outlets can function as both a publisher and a platform, depending on who posted the content.
Q - Does Section 230 allow platforms to leave up any content they wish?
A - No, there are certain kinds of content platforms must remove by law. Content that violates federal law (child pornography, DMCA violations, etc.) must be removed by a platform once it has been informed of the existence of such content. The most recent narrowing of Section 230 is FOSTA-SESTA, a law passed by Congress that strips Section 230 protection from any platform that knowingly assists, facilitates, or supports “sex trafficking.”
Now that you are better prepared to challenge those who misrepresent Section 230 in order to push an agenda, go forth and do so! The push to repeal Section 230 has bipartisan support, and we Section 230 supporters need all the help we can get in pushing back against that trend.
“The law also states that a platform cannot be held liable for removing any content it finds objectionable, even if that content is protected by the Constitution.” Actually it says any content they *in good faith* find to be obscene, etc.
A lot of the Big Tech behavior that users find objectionable -- in particular the uneven enforcement of muddy rules about what is and isn’t allowed, which can feel like censorship -- could be addressed by giving some teeth to the “in good faith” requirement. We don’t need to touch 230 itself to curb some of that behavior.