Measuring the quality of web content using factual information

  • Authors:
  • Elisabeth Lex;Michael Voelske;Marcelo Errecalde;Edgardo Ferretti;Leticia Cagnina;Christopher Horn;Benno Stein;Michael Granitzer

  • Affiliations:
  • Know-Center GmbH;Bauhaus-Universität Weimar;Universidad Nacional de San Luis;Universidad Nacional de San Luis;Universidad Nacional de San Luis;Graz University of Technology;Bauhaus-Universität Weimar;University of Passau

  • Venue:
  • Proceedings of the 2nd Joint WICOW/AIRWeb Workshop on Web Quality
  • Year:
  • 2012

Abstract

Nowadays, many decisions are based on information found on the Web. For the most part, the disseminating sources are not certified, and hence assessing the quality and credibility of Web content has become more important than ever. We present factual density, a simple statistical quality measure based on facts extracted from Web content using Open Information Extraction. In a first case study, we use this measure to identify featured/good articles in Wikipedia. We compare factual density with word count, a measure that has been applied successfully to this task in the past. Our evaluation corroborates the good performance of word count in Wikipedia, since featured/good articles are often longer than non-featured ones. However, for articles of similar length the word count measure fails, while factual density separates them with an F-measure of 90.4%. We also investigate the use of relational features for categorizing Wikipedia articles into featured/good versus non-featured ones, achieving an F-measure of 86.7% for articles of similar length and 84% otherwise.
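
The sketch below illustrates one way the factual density measure could be computed: the number of facts an Open Information Extraction system returns for a document, normalized by the document's length in tokens. The `extract`-style triple format, the `factual_density` helper, and the token-count normalization are assumptions made for illustration, not the authors' exact formulation.

```python
import re

def tokenize(text: str) -> list[str]:
    """Split text into word tokens via a simple regex."""
    return re.findall(r"\w+", text)

def factual_density(text: str, facts: list[tuple[str, str, str]]) -> float:
    """Number of extracted facts normalized by document length in tokens.

    `facts` is assumed to be a list of (subject, relation, object) triples
    produced by an Open Information Extraction system (e.g. ReVerb).
    """
    n_tokens = len(tokenize(text))
    if n_tokens == 0:
        return 0.0
    return len(facts) / n_tokens

# Illustrative usage with hand-written triples standing in for Open IE output:
article = "Graz is the capital of Styria. The city lies on the Mur river."
triples = [
    ("Graz", "is the capital of", "Styria"),
    ("The city", "lies on", "the Mur river"),
]
print(f"factual density: {factual_density(article, triples):.3f}")
```

Normalizing by length is what lets the measure distinguish articles of similar word count: two equally long articles can differ sharply in how many verifiable facts they contain.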