Focus+context interaction techniques based on the metaphor of lenses are used to navigate and interact with objects in large information spaces. They provide in-place magnification of a region of the display without requiring users to zoom into the representation and consequently lose context. To avoid occluding its immediate surroundings, the magnified region is often integrated into the context through smooth transitions based on spatial distortion. Such lenses have been developed for various types of representations, using techniques that are often tightly coupled with the underlying graphics framework. We describe a representation-independent solution that can be implemented with minimal effort in different graphics frameworks, ranging from 3D graphics to rich multiscale 2D graphics combining text, bitmaps, and vector graphics. Our solution is not limited to spatial distortion: it provides a unified model in which new focus+context interaction techniques can be defined as lenses whose transition is a combination of dynamic displacement and compositing functions. We present the results of a series of user evaluations showing that one such new lens, the speed-coupled blending lens, significantly outperforms all others.
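To make the unified model concrete, the Python sketch below renders one pixel of a lens by pairing a displacement function, which compresses the transition ring around the magnified focus, with a compositing function that fades the focus out as the pointer moves faster, in the spirit of the speed-coupled blending lens. The names (displace, speed_coupled_alpha, render_pixel), the linear profiles, and the v_max constant are all illustrative assumptions, not the paper's actual formulas.

import math

def displace(d, ri, ro, M):
    # Inverse displacement: for a screen pixel at distance d from the
    # lens center, return the distance in the unmagnified content to
    # sample from. Flat magnification (d / M) inside the focus radius
    # ri, identity beyond the outer radius ro, and a linear transition
    # ring in between, so the magnified focus fits in place.
    if d <= ri:
        return d / M
    if d >= ro:
        return d
    t = (d - ri) / (ro - ri)
    return ri / M + t * (ro - ri / M)

def speed_coupled_alpha(speed, v_max=400.0):
    # Compositing function of a speed-coupled blending lens: the focus
    # layer is opaque when the lens is still and fades out as pointer
    # speed (px/s) grows, letting users see the context through the
    # lens while repositioning it. v_max is an assumed tuning constant.
    return max(0.0, min(1.0, 1.0 - speed / v_max))

def render_pixel(sample, cx, cy, x, y, ri, ro, M, speed):
    # sample(x, y) -> (r, g, b) reads the underlying representation at
    # scale 1, which is what keeps the model representation-independent.
    d = math.hypot(x - cx, y - cy)
    src = displace(d, ri, ro, M)
    s = src / d if d > 0 else 1.0 / M
    focus = sample(cx + (x - cx) * s, cy + (y - cy) * s)
    if d > ri:
        return focus  # transition ring and context: distortion only
    context = sample(x, y)  # what lies beneath the magnified focus
    a = speed_coupled_alpha(speed)
    return tuple(a * f + (1 - a) * c for f, c in zip(focus, context))

Swapping in a different displacement or alpha profile yields other lenses within the same model; for instance, keeping the identity displacement and varying only the alpha term would give a pure blending lens with no spatial distortion.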