The visual display of quantitative information
The Xerox Star: A Retrospective
Computer
Cognitive dimensions of notations
Proceedings of the Fifth Conference of the British Computer Society Human-Computer Interaction Specialist Group: People and Computers V
Task-analytic approach to the automated design of graphic presentations
ACM Transactions on Graphics (TOG)
Multimodal human-computer interface
Fundamentals of speech synthesis and speech recognition
Beyond Fitts' law: models for trajectory-based HCI tasks
Proceedings of the ACM SIGCHI Conference on Human factors in computing systems
Visual task characterization for automated visual discourse synthesis
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
The grammar of graphics
Is paper safer? The role of paper flight strips in air traffic control
ACM Transactions on Computer-Human Interaction (TOCHI) - Special issue on interface design for safety-critical interactive systems: when there is no room for user error
Comparing interfaces based on what users watch and do
ETRA '00 Proceedings of the 2000 symposium on Eye tracking research & applications
The keystroke-level model for user performance time with interactive systems
Communications of the ACM
Toward automated exploration of interactive systems
Proceedings of the 7th international conference on Intelligent user interfaces
SpiraClock: a continuous and non-intrusive display for upcoming events
CHI '02 Extended Abstracts on Human Factors in Computing Systems
A problem-oriented classification of visualization techniques
VIS '90 Proceedings of the 1st conference on Visualization '90
A Knowledge Task-Based Framework for Design and Evaluation of Information Visualizations
INFOVIS '04 Proceedings of the IEEE Symposium on Information Visualization
Low-Level Components of Analytic Activity in Information Visualization
INFOVIS '05 Proceedings of the 2005 IEEE Symposium on Information Visualization
Do we need eye trackers to tell where people look?
CHI '06 Extended Abstracts on Human Factors in Computing Systems
ACM Transactions on Applied Perception (TAP)
A predictive model of menu performance
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
Animated Transitions in Statistical Data Graphics
IEEE Transactions on Visualization and Computer Graphics
A cognitive model for understanding graphical perception
Human-Computer Interaction
ACT-R: a theory of higher level cognition and its relation to visual attention
Human-Computer Interaction
Automated eye-movement protocol analysis
Human-Computer Interaction
A Nested Model for Visualization Design and Validation
IEEE Transactions on Visualization and Computer Graphics
Improving users' comprehension of changes with animation and sound: an empirical assessment
INTERACT'07 Proceedings of the 11th IFIP TC 13 international conference on Human-computer interaction
Visual Thinking for Design
23rd French Speaking Conference on Human-Computer Interaction
Augmenting the scope of interactions with implicit and explicit graphical structures
Proceedings of the SIGCHI Conference on Human Factors in Computing Systems
The four-level nested model revisited: blocks and guidelines
Proceedings of the 2012 BELIV Workshop: Beyond Time and Errors - Novel Evaluation Methods for Visualization
Is there a difference between visual and textual languages in terms of perception?
Proceedings of the 25th French-speaking Conference on Human-Computer Interaction (Interaction Homme-Machine)
When designing a representation, the designer implicitly formulates a sequence of visual tasks required to understand and use the representation effectively. This paper aims to make that sequence of visual tasks explicit in order to help designers elicit their design choices. In particular, we present a set of concepts for systematically analysing what a user must, in theory, do to decipher a representation. The analysis decomposes the activity of scanning a representation into elementary visualization operations. We show how the analysis applies to various existing representations, and how expected benefits can be expressed in terms of elementary operations. The set of elementary operations forms the basis of a shared language for representation designers. The decomposition highlights the challenges a user encounters when deciphering a representation, and helps designers expose possible flaws in a design, justify their choices, and compare designs. We also show that interaction with a representation can be regarded as facilitation of the elementary operations.
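To give a concrete feel for the kind of analysis the abstract describes, the reading process of a representation can be modeled as data: a sequence of named elementary operations, so that two candidate designs can be compared by the operations they demand of the user. The following is a minimal illustrative sketch only; the operation names and the two example scan sequences are hypothetical and are not taken from the paper.

```python
from collections import Counter

# Hypothetical elementary visualization operations a reader performs
# while scanning a representation (names are illustrative only).
scan_design_a = ["locate", "read_value", "memorize", "locate", "compare"]
scan_design_b = ["locate", "compare"]

def operation_profile(scan):
    """Count how often each elementary operation occurs in a scan."""
    return Counter(scan)

def compare_designs(scan_a, scan_b):
    """Per operation, how many more steps design A requires than design B."""
    a, b = operation_profile(scan_a), operation_profile(scan_b)
    return {op: a[op] - b[op] for op in set(a) | set(b)}

print(compare_designs(scan_design_a, scan_design_b))
# Positive counts mark operations that design B spares the user.
```

Expressing scans this way makes "expected benefits in terms of elementary operations" directly computable: a redesign is justified when it removes costly operations (e.g. memorization) from the sequence.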