Content Moderation

May 11, 2024 • Louie Mantia

Hot take: We don’t need content moderation if we don’t have trending posts, universal search, and linked replies.

The need for content moderation stems from the extended reach platforms provide. Even the companies that claim to care about decentralization still, for some reason, make space for trending content, universal search, and unrestricted replies. These features do not support decentralization. They support platforming.

Without these things, bad shit won’t often reach us. It reaches us only when it’s surfaced. Cut that off, and the experience becomes more pleasant for everyone.

There’s a “discoverability” brainworm that plagues us all. Almost every one of us secretly wants to be famous and thinks we’re just one good repost away from it, so we cling to features that we think will improve our odds. The reality, of course, is that fame won’t come. We trade the hope of getting famous for the burden of moderating content.

While we won’t become famous for a hit post, the voices of people who antagonize others are consistently elevated, because they’re discovered by the very tools we thought would help us. With universal search, bad actors can find content to reply to. Their replies are effectively given a platform merely by being associated with whoever they replied to. (In the past, I’ve called this piggybacking.) Enraged people then quote or boost those replies in an attempt to highlight or shame the bad actor. Noble as it may seem to confront intolerance, it also gives them additional attention. And with that increased engagement, the controversial content makes its way into trending sections for everyone to see and get mad about.

Inevitably, this leads to an uptick in conversation about how content moderation could be better, why it’s necessary, how it should be built in from the start, how to implement it now, who should subject themselves to the horrible task of actually doing it, and how we’re going to pay for it.

No one questions the premise of discoverability. Everyone just assumes these features must exist for a product to be viable. Do they, though? I think content moderation is never going to scale, because the problem knows no bounds. It doesn’t stop or end, much like automated spam or AI-generated slop. Harmful content on social media doesn’t stop when we hide posts or ban people, because there is always more right behind it. You have to cut off the methods by which they gain extended reach.

We could absolve ourselves of this burden by giving up the false hope that any of those tools actually help us more than they hurt us. The biggest problem is replies. But no one seems to want to admit it. We can’t afford to make the same mistakes again. We have to assume we are smarter today than we were yesterday.
