Effective interaction in a virtual environment requires that the user can adequately judge the spatial relationships between objects in a 3D scene. To support adequate depth perception, existing virtual environments provide perceptual cues through stereoscopy, motion parallax, and (active or passive) haptic feedback. Dedicated hardware, such as high-end monitors with stereoscopic glasses, head tracking, and mirrors, is required to accomplish this. Many potential VR users, however, refuse to wear cumbersome devices or to adjust to an imposed work environment, especially for longer periods of time. It is therefore important to quantify the repercussions of dropping one or more of these technologies. Since these repercussions are likely to depend on the application area, comparisons should be performed on tasks that are important and/or occur frequently in the application field of interest. In this paper, we report on a formal experiment that establishes the effects of different hardware components on the speed and accuracy of three-dimensional (3D) interaction tasks. The tasks selected for the experiment are inspired by the interactions and complexities that typically occur when exploring molecular structures. From the experimental data, we develop linear regression models to predict the speed and accuracy of the interaction tasks. Our findings show that hardware-supported depth cues have a significant positive effect on task speed and accuracy, while software-supported depth cues, such as shadows and perspective cues, have a negative effect on trial time. Trial times are smaller in a simple fish-tank-like desktop environment than in a more complex co-location-enabled environment, sometimes at the cost of reduced accuracy.
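The regression modelling described above can be sketched in miniature. The snippet below is a hypothetical illustration, not the authors' actual analysis: the data values, factor encoding, and condition set are invented for demonstration. It fits an ordinary-least-squares model in which binary hardware factors (stereo, head tracking, haptic feedback) predict trial time, so that each coefficient estimates how much enabling that depth cue changes the predicted time.

```python
import numpy as np

# Hypothetical data for illustration only (not from the paper's experiment).
# Each row encodes one hardware condition: [intercept, stereo, head tracking,
# haptic feedback]; y holds invented mean trial times in seconds.
X = np.array([
    [1, 0, 0, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 1, 1, 1],
], dtype=float)
y = np.array([12.4, 10.1, 10.8, 8.9, 11.6, 8.2])

# Ordinary least squares: each coefficient estimates the effect of enabling
# one depth cue on trial time (negative = faster task completion).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, stereo, head_tracking, haptics = coef
print(f"intercept={intercept:.2f}s, stereo={stereo:+.2f}s, "
      f"head tracking={head_tracking:+.2f}s, haptics={haptics:+.2f}s")
```

With these invented data, all three hardware factors come out with negative coefficients, i.e. the fitted model predicts shorter trial times when a cue is enabled, which is the direction of effect the abstract reports for hardware-supported depth cues.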