Interacting with molecular structures: user performance versus system complexity

  • Authors: R. van Liere; J.-B. O. S. Martens; A. J. F. Kok; M. H. A. V. van Tienen

  • Affiliations: Center for Mathematics and Computer Science, CWI, Amsterdam and Department of Mathematics and Computer Science, Eindhoven University of Technology; Department of Industrial Design, Eindhoven University of Technology; Department of Mathematics and Computer Science, Eindhoven University of Technology; Department of Mathematics and Computer Science, Eindhoven University of Technology

  • Venue: EGVE'05 Proceedings of the 11th Eurographics conference on Virtual Environments

  • Year: 2005


Abstract

Effective interaction in a virtual environment requires that the user can adequately judge the spatial relationships between the objects in a 3D scene. In order to accomplish adequate depth perception, existing virtual environments create useful perceptual cues through stereoscopy, motion parallax and (active or passive) haptic feedback. Specific hardware, such as high-end monitors with stereoscopic glasses, head-mounted tracking and mirrors, is required to accomplish this. Many potential VR users, however, refuse to wear cumbersome devices or to adjust to an imposed work environment, especially for longer periods of time. It is therefore important to quantify the repercussions of dropping one or more of the above technologies. These repercussions are likely to depend on the application area, so comparisons should be performed on tasks that are important and/or occur frequently in the application field of interest. In this paper, we report on a formal experiment in which the effects of different hardware components on the speed and accuracy of three-dimensional (3D) interaction tasks are established. The tasks selected for the experiment are inspired by the interactions and complexities that typically occur when exploring molecular structures. From the experimental data, we develop linear regression models to predict the speed and accuracy of the interaction tasks. Our findings show that hardware-supported depth cues have a significant positive effect on task speed and accuracy, while software-supported depth cues, such as shadows and perspective cues, have a negative effect on trial time. Task trial times are shorter in a simple fish-tank-like desktop environment than in a more complex co-location-enabled environment, sometimes at the cost of reduced accuracy.
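The modeling step described above can be illustrated with a minimal sketch: fitting an ordinary least-squares linear model that predicts trial time from binary factors encoding which depth cues are enabled. The factor names, coefficients, and simulated data below are hypothetical and invented for illustration only; they are not the paper's experimental data, but the simulated signs mirror the reported trend (hardware cues reduce trial time, software shadow cues increase it).

```python
import numpy as np

# Hypothetical experimental design: each trial has binary indicators for
# which depth cues were available. These factor names are assumptions.
rng = np.random.default_rng(0)
n = 40
stereo = rng.integers(0, 2, n)         # 1 = stereoscopy enabled
head_tracking = rng.integers(0, 2, n)  # 1 = motion parallax via head tracking
shadows = rng.integers(0, 2, n)        # 1 = software-rendered shadow cues

# Simulated trial times (seconds), constructed so that hardware cues speed
# the task up and software shadow cues slow it down (illustrative only).
trial_time = (10.0 - 2.0 * stereo - 1.5 * head_tracking
              + 1.0 * shadows + rng.normal(0.0, 0.5, n))

# Ordinary least-squares fit:
#   trial_time ~ b0 + b1*stereo + b2*head_tracking + b3*shadows
X = np.column_stack([np.ones(n), stereo, head_tracking, shadows])
coef, *_ = np.linalg.lstsq(X, trial_time, rcond=None)
print(coef)  # b1, b2 negative (hardware cues help); b3 positive (shadows hurt)
```

A model of this shape lets one read each coefficient as the estimated change in trial time when a single cue is toggled, which is the kind of per-component effect the experiment quantifies; an analogous model would be fit for accuracy.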