My position is that improving evaluation for visualization requires more than developing increasingly sophisticated evaluation methods. It also requires improving the efficacy of evaluations, which involves how evaluations are applied, reported, and assessed. Considering the motivations for evaluation in visualization offers a way to explore these issues, but doing so requires a vocabulary for discussion. This paper proposes initial terminology for discussing the motivations of evaluation. Specifically, the scales of actionability and persuasiveness provide a framework for understanding the motivations of evaluation and how they relate to the interests of the various stakeholders in a visualization. This framework can help keep issues such as audience, reporting, and assessment in focus as evaluation expands to new methods.