
Can The Advogato Trust Metric Save Kuro5hin?

Note: as I ready this for voting, Advogato's server has been down for most of the day. Just in case it doesn't come back up, I'll give you links to Archive.org's cached copies of Advogato's pages: The Advogato trust metric and Raph Levien's Advogato homepage.

I'm afraid Advogato is a much lower-budget site than Kuro5hin, and often suffers downtime.

This PDF document is a draft of Raph's thesis. I don't think the trust metric actually in use at Advogato is as elaborate as the systems he has been studying for his dissertation work.

When rusty upgraded Kuro5hin's servers, he announced his intention to upgrade its Scoop code as well. The Scoop Open Source Project has advanced well beyond K5's primitive codebase, with many features K5 users have requested already implemented.

If he hasn't done the upgrade yet, and trust metrics were coded into Scoop first, he could bring them to Kuro5hin when he finally did upgrade. Even if he has already upgraded, I'm sure rusty would find a second upgrade worth his while, considering all the work it would save him and the other editors.

While I have long advocated trust metrics as a solution to Kuro5hin's broken moderation system, I'm sorry to have to say I'm not up to the task of doing the work myself. I don't know Perl; more importantly, I don't know Scoop. While I'm sure I could learn them, Scoop has a complex codebase, and doing a Scoop trust metric as my very first real Perl project would likely take a long time and not produce quality results. Advogato's trust metric is calculated by an Apache module called mod_virgule that is written in C, so I expect I could help with that.

We would need to apply trust metrics to story moderation somehow. My suggestion is to give the vote only to users whose trust rating is high enough that we can be reasonably certain they aren't dupes. One couldn't have absolute certainty without forbidding too many users from voting; it would be sufficient if the rating were set so that only a modest number of dupes could slip through.

Advogato itself is a community website for Free Software and Open Source programmers, created by Levien as a testbed for his theories. A demonstration of the trust metric's effectiveness is that no approval of any kind is required to publish an article on Advogato's front page, yet it remains largely free of spam and trolls. This works because one may only publish stories after attaining the "Journeyer" trust rating; the other ratings are Apprentice, for new users, and Master.

How It Works

The trust metric can be modeled as a graph, with the nodes being user accounts and each edge being the certification of one account for another. To simplify the explanation, assume that there is only one level of certification; one trusts another user, or does not rate him at all. (Advogato itself has three levels of trust, plus no rating.)
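To make the discussion concrete, here is one way such a certification graph could be represented, as a hypothetical Python dict of sets. The account names are invented for illustration; Scoop itself is written in Perl and mod_virgule in C, so treat this as a sketch of the data structure, not anyone's actual code:

    # A hypothetical certification graph: an edge u -> v means "u certifies v".
    certs = {
        "rusty":  {"alice", "bob"},
        "alice":  {"bob", "carol"},
        "bob":    {"carol"},
        "carol":  set(),
        "troll1": {"troll2"},   # bogus accounts certifying each other
        "troll2": {"troll1"},
    }
    seeds = {"rusty"}           # the always-trusted seed account(s)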

Our objective is to divide the nodes into valid ones and bogus ones. For Kuro5hin's purposes, only the valid nodes would be allowed to rate comments or vote on stories.

It can be assumed that many of the bogus nodes will trust each other, but if few valid nodes trust bogus ones, little of the trust will flow into them along the graph. The seed nodes are always trusted, with the flow of trust proceeding along the certification edges to the rest of the graph.

For the trust calculation to be accurate, the graph must be richly interlinked, that is, each user should certify as many other users as he can reasonably trust. New users face a problem, in that they won't be trusted when they join, but if they are allowed to post at least comments, they can earn certifications.

The trust metric algorithm gracefully handles the case of an army of dupes attacking the graph: bogus nodes do not raise the trust level of any nodes that they certify. Some bogus nodes can become trusted, but only in linear proportion to the number of valid nodes that inappropriately certify bogus nodes.
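To show where that linear bound comes from, here is a minimal sketch of an Advogato-style trust computation, following the network-flow formulation in Raph's thesis rather than mod_virgule's actual C code: each account is split into an "in" and an "out" node, each node gets a capacity that shrinks with its distance from the seed, and every account whose one-unit edge to a supersink carries flow is accepted. The capacity schedule and all names below are my own assumptions for illustration:

    # Sketch of the trust-metric flow computation; capacities are assumed.
    from collections import deque

    def advogato_accept(certs, seeds, capacities=(800, 200, 50, 12, 4, 2, 1)):
        """certs: {user: set of certified users}; seeds: trusted accounts.
        Returns the set of accounts judged valid."""
        # 1. Breadth-first distance from the seed fixes each node's capacity.
        dist = {s: 0 for s in seeds}
        queue = deque(seeds)
        while queue:
            u = queue.popleft()
            for v in certs.get(u, ()):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)

        def cap(u):
            return capacities[min(dist[u], len(capacities) - 1)]

        # 2. Build the flow network as a residual-capacity table.
        res = {}
        def add_edge(a, b, c):
            res[(a, b)] = res.get((a, b), 0) + c
            res.setdefault((b, a), 0)

        SINK, SOURCE = "SINK", "SOURCE"
        for u in dist:
            add_edge(("in", u), ("out", u), max(cap(u) - 1, 0))
            add_edge(("in", u), SINK, 1)        # one unit = "u is accepted"
            for v in certs.get(u, ()):
                if v in dist:
                    add_edge(("out", u), ("in", v), cap(u))
        for s in seeds:
            add_edge(SOURCE, ("in", s), cap(s))

        # 3. Edmonds-Karp: repeatedly push flow along a shortest path.
        def bfs():
            prev = {SOURCE: None}
            queue = deque([SOURCE])
            while queue:
                u = queue.popleft()
                for (a, b), c in res.items():
                    if a == u and c > 0 and b not in prev:
                        prev[b] = u
                        if b == SINK:
                            return prev
                        queue.append(b)
            return None

        prev = bfs()
        while prev is not None:
            path, node = [], SINK
            while prev[node] is not None:
                path.append((prev[node], node))
                node = prev[node]
            bottleneck = min(res[e] for e in path)
            for a, b in path:
                res[(a, b)] -= bottleneck
                res[(b, a)] += bottleneck
            prev = bfs()

        # 4. Accept accounts whose unit edge to the sink was saturated.
        return {u for u in dist if res[(("in", u), SINK)] == 0}

With the hypothetical graph from above, advogato_accept(certs, seeds) accepts rusty, alice, bob, and carol but neither troll, since no valid account certifies them. If alice were tricked into certifying troll1, only the limited flow crossing that one mistaken edge could reach the bogus accounts, which is the linear bound just described.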

How is This Better Than Slashdot's Karma?

The key difference is that one rates users, rather than their comments. One's rating can be adjusted or withdrawn at any time, should the user display an increased or decreased level of trustworthiness.

It's not at all hard to game Slashdot's moderation system by karma whoring. I used to do it all the time: just Google for a link relevant to the discussion and post it in a comment. Moderators who aren't critical thinkers, or who haven't been following one's history over a period of time, will be inclined to promote that one comment.

But suppose someone tried to whore karma under the trust metric. While they would be promoted at first for posting informative links, when they tried to use their new status to their trollish advantage, their trust rating would quickly be knocked down by other users. This decrease in rank would apply not just to their newly abusive comments, but to every comment they have ever posted or ever will post, and to their ability to vote for stories in the queue.

How Might It Work For Kuro5hin?

Advogato's sober discussions of software development are a model of decorum compared to Kuro5hin's rowdy horseplay. How could the trust metric be applied to our unique needs?

Consider the problems we need to solve: stories can draw unfairly cast votes, positive or negative, and comments can be unfairly rated, either to hide legitimate ones or to unhide offensive trolls.

Being trusted at Advogato allows one to post front-page stories without approval; there is no moderation of comments there. I think we would still need to subject new stories to voting, and to allow members to hide and unhide comments. If we allowed voting and comment moderation only to users whose trust had been sufficiently certified, unfair votes and inappropriate comment moderations would be rare.

It's not clear whether we need the multiple levels of trust as used by Advogato. Perhaps it would suffice for one to simply be trusted or not. Alternatively, we could have several levels of trust, with the more-highly trusted users' votes and comment mods given greater weight.

The challenge faced by new users who have no certifications yet could be handled through this multi-level approach. Perhaps the trust levels could be Troll, New User, and Experienced User. Trolls wouldn't be allowed to vote or moderate. New Users would be allowed a single vote without needing to be certified. Experienced Users would be given two or more votes.
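As a rough sketch of how votes might then be tallied, assuming the three levels above and made-up weights (nothing here is existing Scoop behavior):

    # Hypothetical vote weights for the trust levels suggested above.
    WEIGHTS = {"Troll": 0, "New User": 1, "Experienced User": 2}

    def tally_story_votes(votes, trust_level):
        """votes: {user: +1 or -1}; trust_level: {user: level name}.
        Returns the weighted score of a story in the queue."""
        return sum(vote * WEIGHTS[trust_level[user]]
                   for user, vote in votes.items())

A story's fate in the queue would then ride on this weighted score rather than on a raw count, so a swarm of uncertified dupe accounts would contribute nothing.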

In this system, one's trust rating would be regularly re-calculated, and expected to change from time to time as one's reputation in the community improves, or doesn't.

Where Do We Go From Here?

I intend this proposal to be a starting point for discussion of how we could moderate in a better way. Any real implementation would need to be subjected to more rigorous analysis than I'm yet able to do. Avenues of attack in our particular implementation would have to be considered, as well as issues of fairness.

A better moderation system is important not just to ensure that users are treated fairly, but also to lighten the workload of Kuro5hin's editors. The more we members are able to moderate each other fairly, the better Kuro5hin will scale to an increasing number of users.

That would enable Kuro5hin's volunteer staff to turn their attention to tasks of more long-lasting value to the site, such as regular maintenance of the Scoop code.