Learning-based evaluation of visual analytic systems

  • Authors:
  • Remco Chang;Caroline Ziemkiewicz;Roman Pyzh;Joseph Kielman;William Ribarsky

  • Affiliations:
  • UNC Charlotte;UNC Charlotte;UNC Charlotte;UNC Charlotte;UNC Charlotte

  • Venue:
  • Proceedings of the 3rd BELIV'10 Workshop: BEyond time and errors: novel evaLuation methods for Information Visualization
  • Year:
  • 2010

Abstract

Evaluation in visualization remains a difficult problem because of the unique constraints and opportunities inherent to visualization use. While many potentially useful methodologies have been proposed, there remain significant gaps in assessing the value of the open-ended exploration and complex task-solving that the visualization community holds up as an ideal. In this paper, we propose a methodology for quantitatively evaluating a visual analytics (VA) system by measuring what its users learn, as demonstrated when they reapply that knowledge to a different problem or domain. This methodology is motivated by the observation that the ultimate goal of a user of a VA system is to gain knowledge of, and expertise with, the dataset, the task, or the tool itself. We propose a framework for describing and measuring knowledge gain in the analytical process based on these three types of knowledge, and we discuss considerations for evaluating each. We argue that through careful design of tests that examine how well participants can reapply knowledge learned from using a VA system, the utility of the visualization can be assessed more directly.