Extracting Trust from Domain Analysis: A Case Study on the Wikipedia Project

  • Authors:
  • Pierpaolo Dondio; Stephen Barrett; Stefan Weber; Jean-Marc Seigneur

  • Affiliations:
  • School of Computer Science and Statistics, Distributed Systems Group, Trinity College Dublin, Dublin (Dondio, Barrett, Weber); University of Geneva, CUI, Geneva 4, Switzerland (Seigneur)

  • Venue:
  • ATC'06: Proceedings of the Third International Conference on Autonomic and Trusted Computing
  • Year:
  • 2006


Abstract

The problem of identifying trustworthy information on the World Wide Web is becoming increasingly acute as new tools such as wikis and blogs simplify and democratize publication. Wikipedia is the most extraordinary example of this phenomenon, and although a few mechanisms have been put in place to improve the quality of contributions, trust in the quality of Wikipedia content has been seriously questioned. We hypothesized that a deeper understanding of what defines high standards and expertise in domains related to Wikipedia – i.e. content quality in a collaborative environment – mapped onto Wikipedia elements would lead to a complete set of mechanisms to sustain trust in the Wikipedia context. Our evaluation, conducted on about 8,000 articles representing 65% of the overall Wikipedia editing activity, shows that the new trust evidence we extracted from Wikipedia allows us to transparently and automatically compute trust values that isolate articles of high or low quality.