Multilevel Integration of Vision and Speech Understanding Using Bayesian Networks

  • Authors:
  • Sven Wachsmuth; Hans Brandt-Pook; Gudrun Socher; Franz Kummert; Gerhard Sagerer

  • Venue:
  • ICVS '99 Proceedings of the First International Conference on Computer Vision Systems
  • Year:
  • 1999

Abstract

The interaction of image and speech processing is a crucial property of multimedia systems. Classical systems that draw inferences on purely qualitative high-level descriptions lose a great deal of information when faced with erroneous, vague, or incomplete data. We propose a new architecture that integrates several levels of processing by using multiple representations of the visually observed scene. These representations are vertically connected by Bayesian networks in order to find the most plausible interpretation of the scene. The interpretation of a spoken utterance naming an object in the visually observed scene is modeled as a further partial representation of the scene. Under this concept, the key problem is identifying the verbally specified object instances in the visually observed scene. For this purpose, a Bayesian network is generated dynamically from the spoken utterance and the visual scene representation. This network encodes spatial knowledge as well as knowledge extracted from psycholinguistic experiments. First results demonstrate the robustness of our approach.
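
The abstract does not reproduce the network itself, but the identification step it describes can be illustrated with a small, self-contained sketch. The following Python code is a hypothetical toy version, assuming a naive-Bayes-shaped network built per query: an identity variable ranges over the detected scene objects, and each spoken attribute word is treated as an observation whose likelihood depends on the matching visual attribute. All names, data structures, and probabilities here are illustrative assumptions, not the authors' actual model; in particular, the real system also encodes spatial relations and psycholinguistically derived confusion data.

```python
from dataclasses import dataclass

@dataclass
class Region:
    """A hypothetical visual object hypothesis (all fields illustrative)."""
    label: str          # recognized object type, e.g. "cube"
    color: str          # recognized color
    type_conf: float    # recognizer confidence in the type label
    color_conf: float   # recognizer confidence in the color label

def attr_likelihood(spoken: str, visual: str, conf: float) -> float:
    """Toy stand-in for P(spoken word | visual attribute).
    If the words agree, use the recognizer confidence; otherwise spread
    the remaining mass over a fixed number of alternatives (simplified)."""
    return conf if spoken == visual else (1.0 - conf) / 4.0

def identify(utterance: dict, scene: list[Region]) -> list[tuple[int, float]]:
    """Posterior over 'which region does the utterance name', computed as
    P(I=i | words) proportional to P(I=i) * prod over attributes of
    P(word_attr | region_i). The network is rebuilt for every query,
    mirroring the dynamic-generation idea in the abstract."""
    prior = 1.0 / len(scene)                      # uniform object prior
    scores = []
    for r in scene:
        lik = attr_likelihood(utterance["type"], r.label, r.type_conf)
        lik *= attr_likelihood(utterance["color"], r.color, r.color_conf)
        scores.append(prior * lik)
    z = sum(scores) or 1.0                        # normalize the posterior
    return sorted(((i, s / z) for i, s in enumerate(scores)),
                  key=lambda t: -t[1])

if __name__ == "__main__":
    scene = [Region("cube", "red", 0.9, 0.8),
             Region("bar", "red", 0.7, 0.9),
             Region("cube", "blue", 0.8, 0.6)]
    # "the red cube" -> the posterior should favor region 0
    print(identify({"type": "cube", "color": "red"}, scene))
```

The design point the sketch tries to capture is that identification is posterior inference rather than symbolic matching: a region whose recognized type or color disagrees with the utterance is not ruled out, only down-weighted, which is what lets the approach tolerate erroneous, vague, or incomplete data.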