Audio/video fusion for objects recognition

  • Authors:
  • Loic Lacheze; Yan Guo; Ryad Benosman; Bruno Gas; Charlie Couverture

  • Affiliations:
  • UPMC Univ Paris 06, UMR, Paris, France (all authors)

  • Venue:
  • IROS '09: Proceedings of the 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems
  • Year:
  • 2009


Abstract

In mobile robotics applications, pattern and object recognition are mainly achieved using vision alone. Several other perceptual modalities are available, such as touch, hearing, or vestibular proprioception; although rarely used, they can provide valuable additional information for recognition tasks. This article presents an analysis of several methods for fusing visual and auditory modalities, relying on a perspective camera and a microphone in a moving object recognition problem. Experimental data are also provided on a database of audio/visual objects, including cases of visual occlusion and audio corruption.
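To illustrate the general idea of combining the two modalities, the following is a minimal sketch of score-level (late) fusion, not the method described in the paper: each modality matches a query descriptor against per-class prototypes, and the resulting normalized scores are combined with a weighted sum. All descriptor dimensions, class counts, and the `alpha` weight are illustrative assumptions.

```python
import numpy as np

def nearest_neighbor_scores(query, prototypes):
    """Per-class similarity scores from Euclidean distances to class prototypes,
    normalized with a softmax so both modalities are on a comparable scale."""
    dists = np.linalg.norm(prototypes - query, axis=1)
    scores = np.exp(-dists)
    return scores / scores.sum()

def late_fusion(visual_scores, audio_scores, alpha=0.5):
    """Weighted-sum (late) fusion of per-class scores from the two modalities.
    alpha weights the visual scores; (1 - alpha) weights the audio scores."""
    return alpha * visual_scores + (1.0 - alpha) * audio_scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)

    # Hypothetical per-class prototype descriptors for 3 object classes.
    visual_prototypes = rng.normal(size=(3, 64))   # e.g. image-patch descriptors
    audio_prototypes = rng.normal(size=(3, 20))    # e.g. cepstral coefficients

    # A query of class 1 with a heavily corrupted (e.g. occluded) visual view
    # and a mildly noisy audio signature.
    visual_query = visual_prototypes[1] + rng.normal(scale=2.0, size=64)
    audio_query = audio_prototypes[1] + rng.normal(scale=0.2, size=20)

    v = nearest_neighbor_scores(visual_query, visual_prototypes)
    a = nearest_neighbor_scores(audio_query, audio_prototypes)
    fused = late_fusion(v, a, alpha=0.5)

    print("visual argmax:", v.argmax())
    print("audio argmax: ", a.argmax())
    print("fused argmax: ", fused.argmax())
```

In this toy setup the audio scores compensate for the degraded visual view, which is the intuition behind evaluating fusion under visual occlusion and audio corruption; the actual descriptors and fusion strategies studied in the paper differ from this sketch.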