Preattentive processing in vision. Computer Vision, Graphics, and Image Processing.
CHI '86 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
The cognitive coprocessor architecture for interactive user interfaces. UIST '89 Proceedings of the 2nd annual ACM SIGGRAPH symposium on User interface software and technology.
Stretching the rubber sheet: a metaphor for viewing large layouts on small screens. UIST '93 Proceedings of the 6th annual ACM symposium on User interface software and technology.
A review and taxonomy of distortion-oriented presentation techniques. ACM Transactions on Computer-Human Interaction (TOCHI).
CHI '94 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Communications of the ACM.
3-dimensional pliable surfaces: for the effective presentation of visual information. Proceedings of the 8th annual ACM symposium on User interface and software technology.
A focus+context technique based on hyperbolic geometry for visualizing large hierarchies. CHI '95 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Navigating hierarchically clustered networks through fisheye and full-zoom methods. ACM Transactions on Computer-Human Interaction (TOCHI).
Information visualization: perception for design.
An initial examination of ease of use for 2D and 3D information visualizations of Web content. International Journal of Human-Computer Studies - Empirical evaluation of information visualizations.
A framework for unifying presentation space. Proceedings of the 14th annual ACM symposium on User interface software and technology.
Improving focus targeting in interactive fisheye views. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Navigation patterns and usability of zoomable user interfaces with and without an overview. ACM Transactions on Computer-Human Interaction (TOCHI).
Fisheyes are good for large steering tasks. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems.
Techniques for non-linear magnification transformations. INFOVIS '96 Proceedings of the 1996 IEEE Symposium on Information Visualization (INFOVIS '96).
H3: laying out large directed graphs in 3D hyperbolic space. INFOVIS '97 Proceedings of the 1997 IEEE Symposium on Information Visualization (InfoVis '97).
Nonlinear magnification fields. INFOVIS '97 Proceedings of the 1997 IEEE Symposium on Information Visualization (InfoVis '97).
TreeJuxtaposer: scalable tree comparison using Focus+Context with guaranteed visibility. ACM SIGGRAPH 2003 Papers.
DateLens: a fisheye calendar interface for PDAs. ACM Transactions on Computer-Human Interaction (TOCHI).
A comparison of fisheye lenses for interactive layout tasks. GI '04 Proceedings of the 2004 Graphics Interface Conference.
Effects of 2D geometric transformations on visual memory. APGV '06 Proceedings of the 3rd symposium on Applied perception in graphics and visualization.
Focus+Context techniques are commonly used in visualization systems to simultaneously provide both the details and the context of a dataset. This paper proposes a new methodology for empirically investigating the effect of various Focus+Context transformations on human perception. The methodology is based on the shaker paradigm, which measures performance on a visual task while an image is rapidly alternated with a transformed version of itself. An important aspect of this technique is that it can distinguish two kinds of perceptual cost: (i) the effect of perceiving a static transformed image, and (ii) the effect of the dynamics of the transformation itself. The technique has previously been applied to determine the extent to which human perception is invariant to scaling and rotation [Rensink 2004]. In this paper, we extend the approach to the nonlinear fisheye transformations typically used in Focus+Context systems. We show that there exists a no-cost zone, whose extent can be determined, in which performance is unaffected by an abrupt, noticeable fisheye transformation. The absence of perceptual cost for these sudden changes contradicts the belief that they are necessarily detrimental to performance, and suggests that smoothly animated transitions between visual states are not always necessary. We also show that the technique can map out low-cost zones, in which transformations cause only a slight degradation of performance. Finally, we show that rectangular grids have no positive effect on performance, acting only as a form of visual clutter. These results demonstrate that the perceptual costs of nonlinear transformations can be successfully quantified and, interestingly, that some kinds of sudden transformation can be experienced with minimal or no perceptual cost.
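To make the kind of transformation under study concrete, here is a minimal sketch of a radial fisheye distortion in the style of Sarkar and Brown's graphical fisheye; the function name, the distortion factor `d`, and the lens `radius` are illustrative assumptions, not the specific transformation used in the paper's experiments.

```python
import math

def fisheye(point, focus, radius, d=3.0):
    """Sketch of a Sarkar-Brown-style radial fisheye lens.

    Points inside the lens are pushed outward from the focus by the
    magnification function g(x) = (d + 1) * x / (d * x + 1), where x is
    the normalized distance from the focus; points outside the lens
    radius are left unchanged. 'd' is the distortion factor (d = 0
    means no distortion).
    """
    dx, dy = point[0] - focus[0], point[1] - focus[1]
    dist = math.hypot(dx, dy)
    if dist == 0 or dist >= radius:
        return point  # at the focus or outside the lens: unchanged
    x = dist / radius
    g = (d + 1) * x / (d * x + 1)          # magnified normalized distance
    scale = g * radius / dist              # radial scale factor
    return (focus[0] + dx * scale, focus[1] + dy * scale)
```

In the shaker paradigm as described above, an image of a scene and a version of it distorted by such a lens would be rapidly alternated while observers perform a visual task.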