Cross-Evaluation: A new model for information system evaluation

  • Authors:
  • Ying Sun; Paul B. Kantor

  • Affiliations:
  • Rutgers University, 4 Huntington Street, New Brunswick, NJ 08903 (both authors)

  • Venue:
  • Journal of the American Society for Information Science and Technology
  • Year:
  • 2006


Abstract

In this article, we introduce a new information system evaluation method and report on its application to a collaborative information seeking system, AntWorld. The key innovation of the new method is to use precisely the same group of users who work with the system as judges, an approach we call Cross-Evaluation. In the new method, we also propose to assess the system at the level of task completion. The obvious potential limitation of this method is that individuals may be inclined to think more highly of the materials that they themselves have found, and are almost certain to think more highly of their own work product than they do of the products built by others. The keys to neutralizing this problem are careful design and a corresponding analytical model based on analysis of variance. We model the several measures of task completion with a linear model of five effects, describing the users who interact with the system, the system used to finish the task, the task itself, the behavior of individuals as judges, and the self-judgment bias. Our analytical method successfully isolates the effect of each variable. This approach provides a successful model to make concrete the “three-realities” paradigm, which calls for “real tasks,” “real users,” and “real systems.” © 2006 Wiley Periodicals, Inc.
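
The five-effect linear model described in the abstract can be sketched in standard ANOVA notation. The symbols below are illustrative assumptions, not the paper's own notation: $y_{ustj}$ denotes a judged task-completion score, and each effect term corresponds to one of the five factors named above.

```latex
% Hypothetical sketch of the five-effect linear model (notation assumed,
% not taken from the paper):
%   y_{ustj} : score given by judge j to the work product produced by
%              user u, on system s, for task t
y_{ustj} = \mu
         + \alpha_u            % effect of the user who did the task
         + \beta_s             % effect of the system used
         + \gamma_t            % effect of the task itself
         + \delta_j            % effect of the individual acting as judge
         + \theta\,[j = u]     % self-judgment bias, active when the
                               % judge evaluates his or her own product
         + \varepsilon_{ustj}  % residual error
```

Under this kind of decomposition, the self-judgment indicator $[j = u]$ lets the analysis of variance estimate and subtract the bias users show toward their own work, which is how the design neutralizes the limitation noted in the abstract.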