Zooming versus multiple window interfaces: Cognitive costs of visual comparisons
ACM Transactions on Computer-Human Interaction (TOCHI)
Zooming and multiple windows are two techniques designed to address the focus-in-context problem. We present a theoretical performance model that predicts the relative benefits of these techniques when people use them to compare widely separated groups of objects. The crux of the model is its cognitive component: the strength of multiple windows lies in how they support visual working memory. We apply the model to multiscale comparison, a task in which a user starts from a known visual pattern and searches for an identical or similar pattern among distractors. The model predicts that zooming should be faster for navigating between a few distant locations when demands on visual memory are low, but that multiple windows become more efficient when demands on visual memory are higher or when several distant locations must be investigated. To evaluate the model we conducted an experiment in which users performed a multiscale comparison task with both zooming and multiple-window interfaces; the results confirm its general predictions.
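The qualitative trade-off can be illustrated with a toy cost sketch. This is our illustration, not the authors' model: the functions, parameter names, and time constants below are all invented for the example. The key assumption, following the abstract, is that each zooming comparison pays the full zoom-out/pan/zoom-in cost on every revisit, while multiple windows pay a one-time setup cost per location and cheap glances thereafter; the number of revisits per location stands in for visual-memory demand (patterns that exceed visual working memory force more revisits).

```python
# Toy cost sketch (hypothetical parameters, not the paper's model).

def zoom_time(n_locations, visits_per_location, zoom_cost=2.0, pan_cost=1.0):
    """Every visit under zooming repeats the zoom-out / pan / zoom-in cycle."""
    return n_locations * visits_per_location * (2 * zoom_cost + pan_cost)

def windows_time(n_locations, visits_per_location, setup_cost=6.0, glance_cost=0.3):
    """Windows pay a one-time setup per location; later visits are cheap glances."""
    return n_locations * (setup_cost + visits_per_location * glance_cost)

# Low memory demand (one visit per location): zooming wins.
print(zoom_time(2, 1), windows_time(2, 1))    # 10.0 vs 12.6
# High memory demand (four revisits per location): windows win.
print(zoom_time(2, 4), windows_time(2, 4))    # 40.0 vs 14.4
```

With these made-up constants the crossover reproduces the abstract's prediction: zooming is cheaper for a few low-memory-demand comparisons, and multiple windows dominate once revisits multiply.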