Chad Perrin: SOB

31 July 2006

auto-management of online reputation systems

Filed under: Cognition, Geek — apotheon @ 12:14

I’m quite fond of the concept of a social valuation system — a reputation system — for site moderation. For instance, there’s the “Score” system on Slashdot, and the “Reputation” system on PerlMonks. These systems induce members to rate other members’ postings, which contributes both to an at-a-glance ability to evaluate a given set of statements and to a related overall individual reputation for people who regularly post there.
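
For concreteness, here is a rough sketch of the sort of bookkeeping such a system implies (purely illustrative, and not how Slashdot or PerlMonks actually implement it): each post accumulates a score from votes, and each member’s overall reputation aggregates the scores of that member’s posts.

    # Hypothetical sketch of a post-score / member-reputation data model.
    # Names are made up; this is not Slashdot's or PerlMonks' actual code.
    from collections import defaultdict

    class ReputationBoard:
        def __init__(self):
            self.post_scores = defaultdict(int)   # post_id -> running score
            self.post_author = {}                 # post_id -> author name
            self.reputation = defaultdict(int)    # author -> sum of post scores

        def add_post(self, post_id, author):
            self.post_author[post_id] = author

        def vote(self, post_id, delta):
            """delta is +1 for an upvote, -1 for a downvote."""
            self.post_scores[post_id] += delta
            self.reputation[self.post_author[post_id]] += delta

    board = ReputationBoard()
    board.add_post("p1", "alice")
    board.vote("p1", +1)
    board.vote("p1", -1)
    print(board.post_scores["p1"], board.reputation["alice"])  # 0 0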

Such systems can be subject to abuse, however — and they are not idiot-proof.

An example of abuse:

  1. Someone posts about something that happens to touch on gun control.
  2. Someone else disagrees with the OP‘s stance on gun control, and downvotes the original post — then hunts down all posts by the OP on any subject and downvotes as many of them as possible, until running out of votes, just to negatively affect that person’s overall reputation and achieve some kind of petty vindictive “victory”.
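
One hypothetical countermeasure to that kind of vendetta voting (sketched here only for illustration; it is not something this post proposes or that any of these sites necessarily does) is to cap how many downvotes one member can aim at a single author per day:

    # Hypothetical throttle on vendetta downvoting: each voter may apply at
    # most MAX_DOWNVOTES_PER_TARGET downvotes against any one author per day.
    # The limit and all names are invented for illustration.
    from collections import defaultdict
    from datetime import date

    MAX_DOWNVOTES_PER_TARGET = 2
    downvotes_today = defaultdict(int)  # (voter, target_author, day) -> count

    def allow_downvote(voter, target_author, day=None):
        day = day or date.today()
        key = (voter, target_author, day)
        if downvotes_today[key] >= MAX_DOWNVOTES_PER_TARGET:
            return False  # further downvotes against this author are ignored
        downvotes_today[key] += 1
        return True

    print(allow_downvote("grudge_holder", "op"))  # True
    print(allow_downvote("grudge_holder", "op"))  # True
    print(allow_downvote("grudge_holder", "op"))  # False: daily limit reached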

An example of idiocy:

  1. Someone posts a lucid, logically valid, eloquent defense of a very unpopular position — perhaps he opposes socialized medicine in all forms on, say, a Canadian online community forum.
  2. Thirty other Canadians disagree vehemently because the news media outlets tell them to, and downvote the OP because it’s “obvious” that he’s “wrong”. Nobody bothers to actually address the logic of the post.

A reasonably well-designed reputation system can provide a mostly useful, mostly automatic form of social self-management, but edge-case failures like the two examples above do occur. This is particularly true because even communities made up primarily of relatively reasonable people with a grasp of valid logic will, over time, develop definite opinion trends, so that expressing certain opinions becomes a de facto taboo.

It would be nice to design a system whose mechanism, by its very nature, encourages better decision-making on the part of rep voters, so that downvotes happen not simply because of disagreement but for “good” reasons. The reputation voting system would then more effectively encourage thoughtful, well-reasoned posts and discourage only trolls, flames, and other undesirable behavior (where “undesirable” essentially means behavior that is thoughtless or idiotic, not merely “wrong”). It’s something I’ve been thinking about for a long time, and if a better system than those I’ve already seen comes to mind, I would like to implement it at some point. I need ideas, though; not much comes to mind.
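
As one illustrative direction (an assumption about what such a mechanism might look like, not a design from this post), a system could refuse to count any downvote that does not cite a behavioral reason, with “I disagree” deliberately absent from the accepted list:

    # Illustrative only: downvotes must cite a behavioral reason, and
    # "disagree" is deliberately not accepted, so disagreement alone
    # cannot lower a post's score. Names are hypothetical.
    ACCEPTED_DOWNVOTE_REASONS = {"troll", "flame", "spam", "off-topic"}

    def apply_vote(scores, post_id, delta, reason=None):
        if delta < 0 and reason not in ACCEPTED_DOWNVOTE_REASONS:
            return False  # downvote silently dropped
        scores[post_id] = scores.get(post_id, 0) + delta
        return True

    scores = {}
    apply_vote(scores, "p1", +1)                     # counted
    apply_vote(scores, "p1", -1)                     # no reason given: dropped
    apply_vote(scores, "p1", -1, reason="disagree")  # disagreement alone: dropped
    print(scores["p1"])  # 1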


  1. That’s when you start getting stuff like metamodding and a more complex system for deciding who gets points, though. Vote like an idiot, and you lose your ability to vote. It’s not perfect, but it is an improvement.

    Comment by h3st — 31 July 2006 @ 02:38
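
For readers unfamiliar with the term, metamoderation roughly means that the moderations themselves get rated, and members whose votes are consistently judged unfair lose the ability to vote. A loose sketch of that idea (not Slashdot’s actual algorithm; the thresholds here are invented):

    # Rough sketch of metamoderation-style vote weighting: other members
    # rate each moderation as fair or unfair, and habitually "unfair"
    # moderators lose the ability to vote. Thresholds are invented.
    from collections import defaultdict

    fair = defaultdict(int)
    unfair = defaultdict(int)

    def metamoderate(moderator, was_fair):
        (fair if was_fair else unfair)[moderator] += 1

    def can_vote(moderator):
        total = fair[moderator] + unfair[moderator]
        if total < 5:
            return True  # too little history to judge
        return unfair[moderator] / total < 0.5

    for _ in range(6):
        metamoderate("grudge_holder", was_fair=False)
    print(can_vote("grudge_holder"))  # False: six "unfair" ratings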

  2. Metamodding is a possibility, of course, but it seems a little overly complex. I’m looking for simple solutions — or at least solutions that seem simple to the user. It could also potentially exacerbate the problem: if there’s a socially verboten political opinion (for instance), and someone gets positively modded for it by someone who recognizes the fact that he made a good argument even if the mod doesn’t agree with the position supported, that mod could get a bunch of negative metamoderation attention — which achieves the opposite of what I’m hoping to accomplish.

    Comment by apotheon — 31 July 2006 @ 02:48

  3. It seems to me that “reasonableness” lies a long way off yet in terms of AI. And humans.

    Comment by SterlingCamden — 31 July 2006 @ 03:15

  4. I don’t need AI, or even “reasonableness” in humans — I just want to simulate reasonableness via some technical encouragement of actions. In other words, I don’t care whether the humans are reasonable, or the software, so long as the statistics generated by the reputation system look an awful lot like the result of reasonableness on the part of the humans involved.

    Comment by apotheon — 31 July 2006 @ 03:54

  5. This is only going to work in some situations, but the surefire way to filter out broken mods is to configure your account to only “believe” the mods of those you know and trust in some way. And make it easy to find and indicate those individuals.

    Comment by sosiouxme — 31 July 2006 @ 05:25

  6. Hey, that’s a neat feature idea. I should use it. You’re right about the limited applicability, but it’s still a good idea.

    Comment by apotheon — 31 July 2006 @ 06:41
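
A minimal sketch of the trust-filtered moderation idea from comment 5, assuming each reader keeps a personal list of trusted members and the score shown to that reader counts only their votes (all names here are hypothetical):

    # Hypothetical trust-filtered scoring: when displaying a score to a
    # given reader, count only the votes cast by members that reader
    # has marked as trusted.
    def trusted_score(votes, trusted_by_reader):
        """votes: list of (voter, delta); trusted_by_reader: set of names."""
        return sum(delta for voter, delta in votes if voter in trusted_by_reader)

    votes = [("alice", +1), ("mallory", -1), ("bob", +1)]
    print(trusted_score(votes, {"alice", "bob"}))  # 2; mallory's vote ignored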


All original content Copyright Chad Perrin: Distributed under the terms of the Open Works License