Is There Any Way to Clean Up Facebook and Twitter?


Tyler Cowen proposes today that there is no form of internet speech moderation that will satisfy everyone:
I’d like to suggest a simple trilemma. When it comes to private platforms and speech regulation, you can choose two of three: scalability, effectiveness and consistency. You cannot have all three. Furthermore, this trilemma suggests that we — whether as users, citizens or indeed managers of the platforms themselves — won’t ever be happy with how speech is regulated on the internet.
Note that Cowen uses “effective” to mean “doesn’t require so much time that the platform company can’t do its core job anymore.” Back when blogs were new and comment moderation was the big issue we were all trying to resolve, I ran into the same trilemma:

If the blog was small, I could easily moderate comments and do it consistently.
If I was willing to spend lots of time on moderation, I could manage a large blog with consistent comment policies.
If I decided not to worry about consistency, I could manage a large blog without putting a lot of time into comment moderation.

I never came anywhere close to finding a solution to this, and I generally chose option #3. I’d do a bit of moderation here and there, and it would necessarily be pretty inconsistent. However, that left me enough time to actually write a blog even as my audience grew. There’s just no way to spend hours moderating comments and still have hours left over to write a blog of decent quality. The same trilemma affects huge social media platforms:

A system that’s big and effective (i.e., lightly moderated by the platform company) will inherently be inconsistent.
A system that’s big and consistent will inherently require huge resources from the platform company and therefore won’t be effective.
A system that’s effective and consistent requires too much human intervention to ever become very big.

Most people don’t get this, and therefore expect too much from platform companies like Twitter and Facebook. These companies can use automation to do a lot of the job, but automation isn’t even close to perfect yet. So what do you do? If the automation is too tight, it will eliminate innocent comments and everyone will scream. If the automation is too loose, it will let lots of hate speech through and everyone will scream. If you ditch the automation and use humans, you’ll go bankrupt—and anyway, human moderation is far from perfect too.
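
To make that tradeoff concrete, here’s a toy sketch in Python. The comments and “toxicity scores” are entirely made up and this isn’t how any real platform works; the point is just that wherever you set an automated filter’s threshold, you trade false positives against false negatives.

```python
# Toy sketch of the automation tradeoff: each comment carries a hypothetical
# classifier score from 0.0 (benign) to 1.0 (abusive). No real system works
# this simply; the scores and comments below are invented for illustration.
comments = [
    ("Great post, thanks!",           0.05),  # innocent
    ("You're an idiot.",              0.70),  # abusive
    ("This argument is garbage.",     0.55),  # rude, but arguably fair criticism
    ("I disagree, and here is why.",  0.10),  # innocent
    ("Go back where you came from.",  0.85),  # abusive
]

def moderate(comments, threshold):
    """Keep comments at or below the threshold; remove everything above it."""
    kept    = [text for text, score in comments if score <= threshold]
    removed = [text for text, score in comments if score > threshold]
    return kept, removed

# A tight threshold removes borderline (and sometimes innocent) comments;
# a loose one lets genuinely abusive comments through. No single value
# makes everyone happy.
for threshold in (0.3, 0.6, 0.9):
    kept, removed = moderate(comments, threshold)
    print(f"threshold={threshold}: kept {len(kept)}, removed {len(removed)}")
```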
I didn’t have an answer to this back when I was a lone blogger (these days I get help from MoJo moderators), and I don’t have one today. However, my own view is that we should think about internet moderation in roughly the same way we think about real-life moderation. This leads me in the direction of (a) light moderation that lets people say whatever they want, even if it’s gruesome, and (b) giving users the tools to do their own moderation. I’m far more in favor of the latter than I am of Twitter or Facebook making centralized decisions about what to allow and what to ban.
This is not a perfect solution, but that’s because there are no perfect solutions. And there’s no question that different people benefit from different levels of moderation. It’s one thing for a white man like me to prefer light moderation, but quite another for a black woman who gets far more abuse. Nonetheless, I don’t really see a good solution other than giving us all more and more tools to set our own preferred moderation levels while we wait for automated systems to get better. That’s going to be a while.
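
For what it’s worth, here’s a minimal sketch of what user-controlled moderation might look like under the hood. The names, scores, and thresholds are all hypothetical; the only point is that the filtering decision moves from one central setting at the platform to a preference each user sets for themselves.

```python
# Minimal sketch of user-controlled moderation: the platform serves one shared
# feed, and each user's own tolerance setting decides what gets hidden.
# Everything here (names, scores, defaults) is hypothetical.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    toxicity: float  # hypothetical classifier score, 0.0 (benign) to 1.0 (abusive)

FEED = [
    Comment("Nice analysis.", 0.05),
    Comment("This take is hot garbage.", 0.55),
    Comment("You people are subhuman.", 0.90),
]

# Per-user preference: a lower threshold means stricter filtering.
USER_THRESHOLDS = {
    "alice": 0.3,   # wants an aggressively cleaned-up feed
    "bob":   0.95,  # wants to see nearly everything
}

def personalized_feed(user: str) -> list[str]:
    """Filter the shared feed according to this user's own tolerance."""
    limit = USER_THRESHOLDS.get(user, 0.6)  # default for users who never set one
    return [c.text for c in FEED if c.toxicity <= limit]

print(personalized_feed("alice"))  # only the benign comment survives
print(personalized_feed("bob"))    # all three comments, abuse included
```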