5 Comments

“The law also states that a platform cannot be held liable for removing any content it finds objectionable, even if that content is protected by the Constitution.” Actually, it says any content they *in good faith* find to be obscene, etc.

A lot of the big tech behavior that users find objectionable, in particular the uneven enforcement of muddy rules about what is and isn’t allowed (which feels like censorship), could be addressed by giving some teeth to the “in good faith” requirement. We don’t need to touch 230 to curb some of that behavior.

author

Who defines "objectionable content" and "good faith", and how would those definitions be applied across all the platforms covered by Section 230?


The common law would. It would only take a couple of court cases for the courts to give some shape to "good faith," based in part on what good faith means in other areas of the law, and part of that would likely involve a consistent view of "objectionable content." So if a platform removes, say, an ad using a classical painting with nudity but allows another ad using similar artwork to remain, that divergent treatment is evidence that the removal wasn't in good faith. Furthermore, because the removal is done under the 230 umbrella, it amounts to a statement by the platform that the content is obscene, lewd, or incitement to violence, which could give rise to a defamation claim against the platform.

It wouldn't take many suits before platforms started being more transparent about their posting rules and consistent in enforcing them—which are the problems driving the ditch 230 efforts. We don't need to touch 230 or pass any other legislation. Tort common law gives us the tools needed to fix these problems with big tech.

I've been hoping there are some test cases along these lines already in the works, but I haven't looked for them yet.

author

The point of Section 230 is to allow platforms to moderate content as they see fit without being sued; "objectionable content" is purposely vague so that platforms can decide for themselves what that content is. To do what you're proposing, Section 230 would have to be gutted.

Laura Loomer's case was a test case, and even her lawyers had to admit Twitter was within its rights to ban her.


Loomer's case is a bad test case, in part because the current cases claim that the platforms have no right to ban what they find objectionable. That's a straight loser under 230. No argument. My suggestion is slightly different. Don't aim at the takedown itself. Aim at the why of the takedown, the objectionableness of it. If they weren't reasonable about that, if they suggested to the public that you, the person they took down, were one or more of the bad things on the 230 list but you aren't and they should've or could've known it (depending on your public-figure status under Sullivan, and note how precise this test case needs to be), they've defamed you. Papers, people, whoever do have a right to say what they think about others, but the common law doesn't permit them to get away with creating a false impression about people. Similarly, 230 doesn't give platforms the right to do anything they want without regard for these long-established rules of reasonable public behavior. A couple of well-chosen and precisely argued defamation cases could do the check work we need here, without having to do anything legislative to 230.

Why does 230 need a check? Because whether we call the big social media platforms common carriers or not, they function a lot like common carriers. I'm with Epstein on this point. Just as there was a public policy interest in making sure the railroads carried everyone with the fare and, say, didn't bump chicken farmers' livestock because of pressure from cattle ranchers, there is a public policy interest in ensuring that private interests don't get to dictate our public discourse.
