Chapter 15. Good Cop, Bad Cop

In the past I played whack-a-mole with abusers. I spent many man-months on algorithmic approaches to fighting abuse, focused on content analysis, sophisticated filters, and automated user "rehab." In the end, the abusers (as a collective) won. While I gave up and left for more fulfilling endeavors, the abusers gradually figured out ways to bypass filters, create multiple accounts, and keep flooding message boards and public forums with spam.

Anyone who has ever tried seriously to block email spam knows that these kinds of arms races are very hard to win. We are very few, the abusers are many, and there always seem to be more of them. They have learned to share sophisticated tools (such as CAPTCHA breakers) so effectively that the pros and the least sophisticated script kiddies are no longer two separate groups.

The good news is that there is a better way, but it requires a fresh approach.

In particular, I've become a true believer in community moderation and collaborative filtering after seeing the success of schemes like those of Y! Answers and Slashdot. In essence, these successful sites empower their (good) users to take ownership of the abuse-fighting and quality goals. They create strong incentives for good users to put effort into moderating the site. Users who consistently exhibit good judgment and a willingness to contribute get "strength points" making ...
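To make the idea concrete, here is a minimal sketch of reputation-weighted moderation, written in Python with names and thresholds of my own invention; it is not the actual Y! Answers or Slashdot implementation, just one way the "strength points" incentive might work: moderators whose votes match the final outcome earn strength, and their future votes carry more weight.

    # Illustrative sketch of reputation-weighted community moderation.
    # All class names, weights, and thresholds are assumptions for the example.
    from dataclasses import dataclass, field

    @dataclass
    class Moderator:
        name: str
        strength: float = 1.0  # baseline vote weight for a new moderator

    @dataclass
    class Post:
        text: str
        votes: dict = field(default_factory=dict)  # moderator name -> +1 (keep) / -1 (remove)

    def cast_vote(post: Post, mod: Moderator, keep: bool) -> None:
        post.votes[mod.name] = 1 if keep else -1

    def resolve(post: Post, mods: dict) -> bool:
        """Decide the post's fate by strength-weighted vote, then reward
        moderators whose judgment matched the outcome."""
        score = sum(mods[name].strength * vote for name, vote in post.votes.items())
        keep = score >= 0
        outcome = 1 if keep else -1
        for name, vote in post.votes.items():
            if vote == outcome:
                mods[name].strength += 0.1                           # good judgment earns strength
            else:
                mods[name].strength = max(0.1, mods[name].strength - 0.1)  # poor judgment loses it
        return keep

    # Example: two trusted moderators outvote a newcomer on a spammy post.
    mods = {m.name: m for m in (Moderator("alice", 2.0), Moderator("bob", 1.5), Moderator("eve"))}
    spam = Post("BUY CHEAP PILLS NOW")
    cast_vote(spam, mods["alice"], keep=False)
    cast_vote(spam, mods["bob"], keep=False)
    cast_vote(spam, mods["eve"], keep=True)
    print("kept" if resolve(spam, mods) else "removed")  # -> removed

The point of the design is the feedback loop: good users gain influence by moderating well, which makes the system cheaper to attack-proof than any centralized filter.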
