Human assessments of document similarity

  • Authors:
  • S. J. Westerman; T. Cribbin; J. Collins

  • Affiliations:
  • Institute of Psychological Sciences, University of Leeds, LS2 9JT, UK; Department of Information Systems and Computing, Brunel University, United Kingdom; Department of Natural and Social Sciences, University of Gloucestershire, United Kingdom

  • Venue:
  • Journal of the American Society for Information Science and Technology
  • Year:
  • 2010

Abstract

Two studies are reported that examined the reliability of human assessments of document similarity and the association between human ratings and the results of n-gram automatic text analysis (ATA). Human inter-assessor reliability (IAR) was moderate to poor. However, correlations between averaged human ratings and n-gram solutions were strong. The average correlation between ATA and individual human solutions was greater than IAR. N-gram length influenced the strength of association, but the optimum string length depended on the nature of the text (technical vs. nontechnical). We conclude that the methodology applied in previous studies may have led to overoptimistic views of human reliability, but that an optimal n-gram solution can provide a good approximation of the average human assessment of document similarity. This result has important implications for the future development of document visualization systems. © 2010 Wiley Periodicals, Inc.
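The abstract does not specify which n-gram ATA variant the studies used, but the general technique is easy to illustrate. Below is a minimal sketch in Python of one common realization: cosine similarity over character n-gram frequency vectors. The function names (`char_ngrams`, `ngram_similarity`) and the choice of character rather than word n-grams are assumptions for illustration, not the authors' method.

```python
from collections import Counter
from math import sqrt

def char_ngrams(text: str, n: int) -> Counter:
    """Count overlapping character n-grams in a lowercased document."""
    text = text.lower()
    return Counter(text[i:i + n] for i in range(len(text) - n + 1))

def ngram_similarity(doc_a: str, doc_b: str, n: int = 3) -> float:
    """Cosine similarity between two documents' n-gram count vectors.

    Returns a value in [0, 1]; 1.0 means identical n-gram profiles.
    """
    a, b = char_ngrams(doc_a, n), char_ngrams(doc_b, n)
    dot = sum(a[g] * b[g] for g in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Example: compare two short texts at several n-gram lengths.
if __name__ == "__main__":
    d1 = "Human assessments of document similarity are only moderately reliable."
    d2 = "Automatic n-gram analysis approximates averaged human similarity ratings."
    for n in range(2, 6):
        print(n, round(ngram_similarity(d1, d2, n), 3))
```

Mirroring the paper's finding that the optimum string length depends on the kind of text, one would typically compute similarities for several values of n (say, 2 through 5) and retain the length whose similarity scores correlate best with averaged human ratings.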