Haptic and sound grid for enhanced positioning in a 3-D virtual environment

  • Authors:
  • Seung-Chan Kim;Dong-Soo Kwon

  • Affiliations:
  • Human Robot Interaction Research Center, KAIST, Daejeon, Republic of Korea;Human Robot Interaction Research Center, KAIST, Daejeon, Republic of Korea

  • Venue:
  • HAID'07 Proceedings of the 2nd international conference on Haptic and audio interaction design
  • Year:
  • 2007


Abstract

Because a 3-D scene is projected onto the flat retina, there can be considerable ambiguity in depth (i.e., z-direction) perception when identifying objects scattered in space; perceived position information is therefore distorted, especially along the z-axis. In this paper, virtual grids rendered with haptic and auditory feedback are proposed to complement ambiguous visual depth cues, and their influence on position identification in a 3-D workspace is investigated experimentally. The haptic grid is generated with the PHANTOM® Omni™ device, and the sound grid is generated by changing the frequency characteristics of a sound source according to the operator's hand movement. Both grids take the form of virtual planes placed at regular intervals of 10 mm along the three axes (x, y, and z). Depending on the test condition, the haptic and sound grids are conveyed to subjects separately or simultaneously; in the bimodal condition, the grids are displayed with cross-modal synchrony. The statistically significant results indicate that the presence of the grid in space increased the average precision of positioning. In particular, errors along the z-axis decreased by more than 50% (F=19.82, p
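The grid mechanics described above can be sketched in code. This is a minimal illustration, not the authors' implementation: it assumes that feedback is triggered whenever the hand crosses one of the virtual planes spaced 10 mm apart, and that the sound grid maps depth to a tone frequency. The base frequency (440 Hz) and per-cell step (20 Hz) are hypothetical values chosen for illustration; the paper does not specify the frequency mapping.

```python
import math

GRID_SPACING_MM = 10.0  # virtual planes every 10 mm along x, y, and z (from the paper)

def plane_crossings(prev_pos, curr_pos, spacing=GRID_SPACING_MM):
    """Count how many grid planes the hand crossed on each axis
    between two sampled positions (in mm). Each crossing would
    trigger a haptic pulse and/or an audio event."""
    return tuple(
        abs(math.floor(p1 / spacing) - math.floor(p0 / spacing))
        for p0, p1 in zip(prev_pos, curr_pos)
    )

def tone_frequency(z_mm, base_hz=440.0, step_hz=20.0, spacing=GRID_SPACING_MM):
    """Hypothetical depth-to-tone mapping for the sound grid:
    the frequency steps up by step_hz for each grid cell along z."""
    return base_hz + step_hz * math.floor(z_mm / spacing)

# Example: hand moves from the origin to (25, 5, -12) mm.
print(plane_crossings((0, 0, 0), (25, 5, -12)))  # (2, 0, 2)
print(tone_frequency(35.0))                      # 500.0
```

The crossing count uses `floor(p / spacing)` to index the grid cell containing each coordinate, so a movement that stays inside one cell produces no feedback events, mirroring the idea that the grid is only perceived at the plane boundaries.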