AudioHaptics: audio and haptic rendering based on a physical model

  • Authors:
  • Hiroaki Yano; Hiromi Igawa; Toshihiro Kameda; Koichi Mizutani; Hiroo Iwata

  • Affiliations:
  • University of Tsukuba (all authors)

  • Venue:
  • HAPTICS '04: Proceedings of the 12th International Conference on Haptic Interfaces for Virtual Environment and Teleoperator Systems
  • Year:
  • 2004

Abstract

In this paper, we propose AudioHaptics, a method for synthesizing haptic and auditory sensations based on a physical model. We have developed a haptic environment that incorporates auditory sensation by fitting a speaker at the end effector of a haptic interface. The finite element method (FEM) is used to calculate the vibration of a virtual object when an impact occurs, and the sound pressure at the speaker position is then calculated in real time from the 2D complex amplitude of the object surface. The AudioHaptics system can generate sounds originating from virtual objects of arbitrary shape, material attributes, and inner structure. Evaluation experiments with users demonstrated that this method is effective for rendering audio and haptic sensations.
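The abstract gives no implementation details, but the pipeline it describes, a physical vibration model of the object driven by an impact and a sound pressure signal evaluated at the speaker position, can be sketched with a common offline/online split: an FEM eigenanalysis of the object is precomputed, and impact sounds are synthesized in real time as a sum of damped modes. The sketch below is an illustrative assumption, not the authors' implementation; all names (ModalModel, impact_sound) and the damped-sinusoid synthesis and 1/r pressure falloff are hypothetical simplifications of the paper's surface-amplitude computation.

```python
import numpy as np

# Hypothetical sketch of FEM-based impact-sound synthesis (not the authors' code).
# Offline: an FEM eigenanalysis of the virtual object yields modal frequencies,
# damping rates, and mode shapes sampled at the object's surface nodes.
# Online: an impact at a surface node excites each mode; the sound pressure at
# the speaker position is approximated as a sum of damped sinusoids weighted by
# the mode shape at the impact point and attenuated by distance.

class ModalModel:
    def __init__(self, freqs_hz, dampings, mode_shapes):
        self.freqs = np.asarray(freqs_hz)       # modal frequencies [Hz]
        self.dampings = np.asarray(dampings)    # exponential decay rates [1/s]
        self.shapes = np.asarray(mode_shapes)   # (num_modes, num_surface_nodes)

def impact_sound(model, node, force, listener_dist, sr=44100, dur=1.0):
    """Synthesize the pressure signal at the speaker after an impact."""
    t = np.arange(int(sr * dur)) / sr
    gains = force * model.shapes[:, node]       # modal excitation at impact node
    signal = np.zeros_like(t)
    # Sum of exponentially damped sinusoids, one per vibration mode.
    for f, d, g in zip(model.freqs, model.dampings, gains):
        signal += g * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return signal / max(listener_dist, 1e-3)    # crude 1/r pressure falloff

# Example: a toy two-mode object struck at surface node 0.
model = ModalModel([440.0, 1230.0], [6.0, 14.0], [[1.0, 0.3], [0.5, -0.8]])
audio = impact_sound(model, node=0, force=0.8, listener_dist=0.4)
```

In the system described by the paper, the FEM vibration solve and the sound pressure evaluation at the speaker position run in real time; the precomputed-modes split above is just one standard way to make such physically based audio rendering tractable at interactive rates.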