Developing guidelines for assessing visual analytics environments

  • Authors:
  • Jean Scholtz

  • Affiliations:
  • Pacific Northwest National Laboratory, Richland, WA

  • Venue:
  • Information Visualization - Special issue on Evaluation for Information Visualization
  • Year:
  • 2011

Abstract

In this article, we develop guidelines for evaluating visual analytics environments based on a synthesis of reviews of the entries to the 2009 Visual Analytics Science and Technology (VAST) Symposium Challenge and on a user study with professional intelligence analysts. By analyzing the 2009 VAST Challenge reviews, we gained a better understanding of what is important to our reviewers, both visualization researchers and professional analysts. We also report on a small user study with professional analysts to determine the factors they consider important in evaluating visual analysis systems. In addition, we examined guidelines developed by researchers in various domains and synthesized the results of these three efforts into an initial set of guidelines for use by others in the community. One challenge for future visual analytics systems is to help in the generation of reports. In our user study, we therefore also worked with analysts to understand the criteria they use to evaluate the quality of analytic reports. We propose that this knowledge will be useful as researchers look at systems to automate some of the report generation. From these efforts, we produced initial guidelines for evaluating visual analytics environments and for evaluating analytic reports. It is important to understand that these guidelines are initial drafts and are limited in scope, as the visual analytics systems we evaluated were used for specific tasks. We propose these guidelines as a starting point for the visual analytics community.