Desktop environments provide a powerful user interface and have served as the de facto standard paradigm for human-computer interaction for over 20 years. However, the growing demand for 3D applications dealing with complex datasets exceeds the capabilities of traditional interaction devices and two-dimensional displays; such applications need more immersive and intuitive interfaces. To be accepted by users, technology-driven solutions that require inconvenient instrumentation, e.g., stereo glasses or tracked gloves, should be avoided. Autostereoscopic display environments equipped with tracking systems enable humans to experience virtual 3D environments more naturally, for instance via gestures, without having to wear cumbersome devices. However, these approaches are currently used only for specially designed or adapted applications. In this paper we introduce new 3D user interface concepts for such setups that require minimal instrumentation of the user and can be integrated easily into everyday working environments. We propose an interaction framework that supports simultaneous display of, and simultaneous interaction with, both monoscopic and stereoscopic content. We identify the challenges for combined mouse-, keyboard- and gesture-based input paradigms in such an environment and introduce novel interaction strategies.
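
The abstract describes the framework only at a high level. As a purely illustrative sketch, and not the authors' implementation, the following hypothetical Python dispatcher shows one way combined mouse-, keyboard- and gesture-based input could be routed between a monoscopic desktop region and a stereoscopic 3D region, depending on where the interaction originates. All class, function, and parameter names here are invented for illustration.

    # Hypothetical sketch only: routes each input event to either a 2D
    # (monoscopic) handler or a 3D (stereoscopic) handler. Names and
    # structure are assumptions, not the authors' actual framework.
    from dataclasses import dataclass
    from enum import Enum, auto


    class Source(Enum):
        MOUSE = auto()
        KEYBOARD = auto()
        GESTURE = auto()


    @dataclass
    class InputEvent:
        source: Source
        x: float = 0.0          # screen-space position (pixels)
        y: float = 0.0
        payload: object = None  # key code, gesture descriptor, etc.


    class HybridDispatcher:
        """Sends each event to the 2D or 3D interaction layer."""

        def __init__(self, stereo_region, handler_2d, handler_3d):
            # stereo_region: (x, y, width, height) of the stereoscopic viewport
            self.stereo_region = stereo_region
            self.handler_2d = handler_2d
            self.handler_3d = handler_3d

        def _in_stereo_region(self, x, y):
            rx, ry, rw, rh = self.stereo_region
            return rx <= x < rx + rw and ry <= y < ry + rh

        def dispatch(self, event: InputEvent):
            # Assumption for this sketch: free-hand gestures target the 3D
            # content, while mouse/keyboard events are routed by pointer position.
            if event.source is Source.GESTURE or self._in_stereo_region(event.x, event.y):
                self.handler_3d(event)
            else:
                self.handler_2d(event)

Under these assumptions, a window manager could create one such dispatcher per display, with the stereoscopic viewport registered as the 3D region and the remaining desktop handled by conventional 2D widgets.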