Integrating physically based sound models in a multimodal rendering architecture

  • Authors:
  • Federico Avanzini;Paolo Crosato

  • Affiliations:
  • Department of Information Engineering, University of Padova, Via Gradenigo 6/A, 35131 Padova, Italy

  • Venue:
  • Computer Animation and Virtual Worlds - CASA 2006
  • Year:
  • 2006

Abstract

This paper presents a multimodal rendering architecture that integrates physically based sound models with haptic and visual rendering. The proposed sound modeling approach is compared to other existing techniques. An example implementation of the architecture is presented, which realizes bimodal (auditory and haptic) rendering of contact stiffness. It is shown that the proposed rendering scheme allows tight synchronization of the two modalities, as well as a high degree of interactivity and responsiveness of the sound models to a user's gestures and actions. Finally, an experiment on the relative contributions of haptic and auditory information to bimodal judgments of contact stiffness is presented. Experimental results support the effectiveness of auditory feedback in modulating haptic perception of stiffness. Copyright © 2006 John Wiley & Sons, Ltd.
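The abstract does not detail the sound models themselves, but physically based contact-sound synthesis of the kind described typically drives a damped resonator with a nonlinear impact force whose stiffness parameter can be varied. The sketch below is an illustrative reconstruction under that assumption, not the paper's actual implementation: a hammer strikes a single vibrational mode through a Hunt–Crossley-style contact force, and all parameter names and values (`stiffness_k`, `alpha`, hammer mass, etc.) are hypothetical.

```python
import math

def impact_sound(stiffness_k=1e6, alpha=1.5, damping_lambda=0.5,
                 f0=440.0, decay_t=0.3, v_in=1.0,
                 fs=44100, dur=0.05):
    """Synthesize one impact of a hammer on a one-mode resonator.

    During contact, the force follows a Hunt-Crossley-style law
    f = k * x^alpha * (1 + lambda * dx), where x is the compression
    and dx its velocity; this force both decelerates the hammer and
    excites a damped oscillator whose displacement is the output signal.
    Raising stiffness_k yields shorter, brighter contacts, which is the
    kind of parameter a bimodal stiffness-rendering scheme would vary.
    """
    dt = 1.0 / fs
    m_h = 0.01                      # hammer mass (kg), assumed value
    x, dx = 0.0, v_in               # compression and compression velocity
    omega = 2.0 * math.pi * f0      # modal angular frequency
    y, dy = 0.0, 0.0                # resonator displacement and velocity
    out, force = [], []
    for _ in range(int(dur * fs)):
        if x > 0.0:                 # hammer and surface are in contact
            f = stiffness_k * x**alpha * (1.0 + damping_lambda * dx)
            f = max(f, 0.0)         # contact force can only push
        else:
            f = 0.0
        # hammer dynamics: contact force decelerates the hammer
        dx -= (f / m_h) * dt
        x += dx * dt
        # resonator: damped oscillator driven by the contact force
        # (semi-implicit Euler at audio rate for stability)
        ddy = f - (2.0 / decay_t) * dy - omega**2 * y
        dy += ddy * dt
        y += dy * dt
        out.append(y)
        force.append(f)
    return out, force
```

Running the model at the audio sample rate, as above, is what makes tight audio-haptic synchronization plausible: the same contact-force signal can feed both the sound synthesis and a haptic device, so the two modalities respond to the same simulated interaction sample by sample.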