Integrating implicit structure visualization with authoring promotes ideation
Proceedings of the 11th annual international ACM/IEEE joint conference on Digital libraries
Information visualization systems can be very complex and require evaluation efforts targeted at the component level, the system level, and the work-environment level. Some components can be evaluated with metrics that can be observed or computed (e.g., speed, accuracy, scalability), while others require empirical user evaluation to determine their benefit to human users. Controlled experiments remain the workhorse of evaluation, but there is a growing sense in the community that information visualization systems need new evaluation methods, from longitudinal field studies and insight-based evaluation to metrics adapted to the perceptual aspects of visualization and the exploratory nature of discovery.

While the overall growth of information visualization is accelerating, the growth of techniques for evaluating its systems has been relatively slow. This is true for both usability studies and intrinsic quality metrics. Usability studies still tend to be conducted in an ad hoc manner, focusing on particular systems, addressing only time and error issues, and failing to produce reusable and robust results. Intrinsic quality metrics are even rarer and less mature, although defining and assessing them is vital.

The aim of the workshop is to collect and discuss innovative ideas on infovis evaluation methods. These include new ways of conducting user studies; the definition and assessment of infovis effectiveness through the formal characterization of perceptual and cognitive tasks and insights; and the definition of quality criteria and metrics. Case study and survey papers are also within the scope of the workshop, since they provide useful general guidelines, practical advice, and lessons learned.
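For illustration, the "observed or computed" metrics mentioned above are exactly the classic "time and errors" measures that the abstract argues are insufficient on their own. The sketch below shows how such metrics might be derived from study logs; it is a minimal, hypothetical example, and the trial records, field layout, and `summarize` helper are assumptions for illustration, not part of any system discussed here.

```python
from statistics import mean

# Hypothetical trial records from a controlled experiment:
# each entry is (task completion time in seconds, task answered correctly?).
trials = [
    (12.4, True),
    (9.8, True),
    (15.1, False),
    (11.0, True),
]

def summarize(trials):
    """Compute the two classic 'time and errors' metrics from trial logs."""
    times = [t for t, _ in trials]
    correct = [ok for _, ok in trials]
    return {
        "mean_time_s": mean(times),       # speed
        "error_rate": 1 - mean(correct),  # accuracy, as fraction of errors
    }

print(summarize(trials))
# {'mean_time_s': 12.075, 'error_rate': 0.25}
```

Such computed metrics capture speed and accuracy but say nothing about insight or exploratory discovery, which is precisely the gap the workshop targets.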