Evaluation in visualization remains a difficult problem because of the unique constraints and opportunities inherent in visualization use. While many potentially useful methodologies have been proposed, significant gaps remain in assessing the value of the open-ended exploration and complex task solving that the visualization community holds up as an ideal. In this paper, we propose a methodology for quantitatively evaluating a visual analytics (VA) system by measuring what its users learn, as demonstrated when they reapply that knowledge to a different problem or domain. The methodology is motivated by the observation that the ultimate goal of a VA system's user is to gain knowledge of and expertise with the dataset, the task, or the tool itself. We propose a framework for describing and measuring knowledge gain in the analytical process in terms of these three types of knowledge and discuss considerations for evaluating each. We argue that through careful design of tests that examine how well participants can reapply knowledge learned from using a VA system, the utility of the visualization can be assessed more directly.