Measuring Information Understanding in Large Document Collections

  • Authors:
  • Malcolm Slaney; Daniel M. Russell

  • Affiliations:
  • IBM Almaden Research Center; IBM Almaden Research Center

  • Venue:
  • HICSS '05: Proceedings of the 38th Annual Hawaii International Conference on System Sciences (HICSS'05) - Track 4 - Volume 04
  • Year:
  • 2005

Abstract

We present a method for testing subjects' performance in a realistic (end-to-end) information-understanding task, the rapid understanding of large document collections, and discuss lessons learned from our attempts to measure representative information-understanding tools and behaviors. To further our understanding of this task, we need to move beyond overly constrained and artificial measurements of easily instrumented behavior. From observations, we know that information analysis is often performed under time pressure and requires the use of large document collections. Instrumenting people in their workplace is often untenable, yet overly simple laboratory studies often miss explanatory richness. We argue that studies of information analysts need to be done with tests that are closely aligned with their natural tasks. Understanding human performance in such tasks requires analysis that accounts for many of the subtle factors that influence final performance, including the role of background knowledge, variations in reading speed, and tool-use costs.