Bias and Controversy in Evaluation Systems

  • Authors:
  • Hady W. Lauw; Ee-Peng Lim; Ke Wang

  • Affiliations:
  • Nanyang Technological University, Singapore; Nanyang Technological University, Singapore; Simon Fraser University, Burnaby

  • Venue:
  • IEEE Transactions on Knowledge and Data Engineering

  • Year:
  • 2008

Abstract

Evaluation is prevalent in real life. With the advent of Web 2.0, online evaluation has become an important feature of many applications involving information sharing (e.g., video, photo, and audio) and social networking (e.g., blogging). In these evaluation settings, a set of reviewers assign scores to a set of objects. As part of evaluation analysis, we want to obtain fair reviews for all the given objects. In reality, however, reviewers may deviate in the scores they assign to the same object, owing to the potential "bias" of reviewers or "controversy" of objects. The statistical approach of averaging deviations to determine bias and controversy assumes that all reviewers and objects should be given equal weight. In this paper, we look beyond this assumption and propose an approach based on two observations: (1) evaluation is "subjective," in that reviewers vary in bias and objects vary in controversy, and (2) bias and controversy are mutually dependent. These observations underlie our proposed reinforcement-based model, which determines bias and controversy simultaneously. Our approach also quantifies "evidence," which reveals the degree of confidence with which bias and controversy have been derived. Experiments on real-life and synthetic datasets show the model to be effective.
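
The mutual dependence described in the abstract can be made concrete with a fixed-point iteration: a reviewer's bias is estimated from their score deviations, weighting deviations on uncontroversial objects more heavily, while an object's controversy is estimated from the deviations of its scores, weighting those from unbiased reviewers more heavily. The sketch below is a minimal, illustrative instantiation of that idea, not the paper's actual equations; the toy `scores` data, the 1-5 scale, the `weighted_mean` helper, the normalization of deviations into [0, 1], and the `1 - x` weighting scheme are all assumptions made for illustration.

```python
import numpy as np

# Hypothetical toy evaluation log: (reviewer, object) -> score on a 1-5 scale.
scores = {
    ("r1", "o1"): 4, ("r1", "o2"): 4, ("r1", "o3"): 5,
    ("r2", "o1"): 4, ("r2", "o2"): 5, ("r2", "o3"): 4,
    ("r3", "o1"): 1, ("r3", "o2"): 2, ("r3", "o3"): 5,  # r3 often disagrees
}

reviewers = sorted({r for r, _ in scores})
objects = sorted({o for _, o in scores})

# Per-object mean score, and each review's deviation from it, scaled to [0, 1].
mean = {o: np.mean([s for (_, o2), s in scores.items() if o2 == o]) for o in objects}
dev = {(r, o): abs(s - mean[o]) for (r, o), s in scores.items()}
max_dev = max(dev.values()) or 1.0  # avoid division by zero if all scores agree
dev = {k: d / max_dev for k, d in dev.items()}

bias = {r: 0.0 for r in reviewers}       # 0 = unbiased
controversy = {o: 0.0 for o in objects}  # 0 = uncontroversial

def weighted_mean(pairs):
    """Mean of (weight, value) pairs; unweighted fallback if weights vanish."""
    total_w = sum(w for w, _ in pairs)
    if total_w < 1e-12:
        return np.mean([v for _, v in pairs])
    return sum(w * v for w, v in pairs) / total_w

for _ in range(100):
    # Bias: a reviewer who deviates even on uncontroversial objects is biased,
    # so deviations on low-controversy objects carry more weight.
    new_bias = {
        r: weighted_mean([(1.0 - controversy[o], d)
                          for (r2, o), d in dev.items() if r2 == r])
        for r in reviewers
    }
    # Controversy: disagreement among unbiased reviewers signals a genuinely
    # controversial object, so their deviations carry more weight.
    new_controversy = {
        o: weighted_mean([(1.0 - bias[r], d)
                          for (r, o2), d in dev.items() if o2 == o])
        for o in objects
    }
    shift = max(
        max(abs(new_bias[r] - bias[r]) for r in reviewers),
        max(abs(new_controversy[o] - controversy[o]) for o in objects),
    )
    bias, controversy = new_bias, new_controversy
    if shift < 1e-9:  # fixed point reached
        break

print("bias:", {r: round(b, 3) for r, b in bias.items()})
print("controversy:", {o: round(c, 3) for o, c in controversy.items()})
```

On the toy data, r3's consistent disagreement yields a higher bias score than r1 or r2, which in turn discounts r3's influence on the controversy estimates of the objects they scored; the paper's "evidence" measure, which quantifies confidence in these derived values, is not modeled in this sketch.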