Real-time auditory and visual talker tracking through integrating EM algorithm and particle filter

  • Authors:
  • Hyun-Don Kim;Kazunori Komatani;Tetsuya Ogata;Hiroshi G. Okuno

  • Affiliations:
  • Department of Intelligence Science and Technology, Graduate School of Informatics, Kyoto University, Kyoto, Japan (all authors)

  • Venue:
  • IEA/AIE'07: Proceedings of the 20th International Conference on Industrial, Engineering, and Other Applications of Applied Intelligent Systems
  • Year:
  • 2007

Abstract

This paper presents techniques that enable talker tracking for effective human-robot interaction. We propose a new way of integrating an EM algorithm and a particle filter to select an appropriate path for tracking the talker. The system can easily adapt to new kinds of tracking information because it estimates the position of the desired talker from the means, variances, and weights calculated by EM training, regardless of the number or kind of information sources. In addition, to enhance the robot's ability to track a talker in real-world environments, we applied the particle filter to talker tracking after executing the EM algorithm. We also integrated a variety of auditory and visual information, namely sound localization, face localization, and the detection of lip movement. Moreover, we applied a sound classification function that allows the system to distinguish among voice, music, and noise, and we developed a vision module that can locate moving objects.
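The paper itself provides no code, but the pipeline the abstract describes (fit a Gaussian mixture by EM over fused auditory/visual direction estimates, then weight a particle filter by that trained mixture) can be sketched compactly. The following is a minimal illustration under stated assumptions, not the authors' implementation: a 1-D azimuth state, a two-component mixture, and synthetic sound/face observations; the function names and all parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def em_gmm_1d(obs, k=2, iters=30):
    """Fit a 1-D Gaussian mixture to fused direction observations
    (e.g., sound and face azimuths in degrees) via EM.
    Returns the means, variances, and weights of the k components."""
    means = rng.choice(obs, size=k, replace=False).astype(float)
    variances = np.full(k, np.var(obs) + 1e-6)
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each observation
        dens = (weights / np.sqrt(2 * np.pi * variances)
                * np.exp(-0.5 * (obs[:, None] - means) ** 2 / variances))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = resp.sum(axis=0)
        means = (resp * obs[:, None]).sum(axis=0) / nk
        variances = (resp * (obs[:, None] - means) ** 2).sum(axis=0) / nk + 1e-6
        weights = nk / len(obs)
    return means, variances, weights

def gmm_likelihood(x, means, variances, weights):
    """Mixture density evaluated at each candidate position x."""
    d = (weights / np.sqrt(2 * np.pi * variances)
         * np.exp(-0.5 * (x[:, None] - means) ** 2 / variances))
    return d.sum(axis=1)

def particle_filter_step(particles, means, variances, weights, motion_std=3.0):
    """One predict-update-resample cycle: diffuse the particles,
    weight them by the EM-trained mixture, and resample."""
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    w = gmm_likelihood(particles, means, variances, weights) + 1e-12
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

# Toy frame: sound localization near 30 deg and face localization near
# 32 deg agree on the talker; a clutter source sits near -60 deg.
obs = np.concatenate([rng.normal(30, 2, 40), rng.normal(32, 2, 40),
                      rng.normal(-60, 10, 20)])
means, variances, weights = em_gmm_1d(obs, k=2)
particles = rng.uniform(-90, 90, size=500)
for _ in range(10):
    particles = particle_filter_step(particles, means, variances, weights)
print("estimated talker azimuth:", particles.mean())
```

Because the particle weights depend only on the mixture parameters, adding a new cue (e.g., lip-movement detection) would only extend the observation set fed to EM; the tracking step itself is unchanged, which mirrors the adaptability claim in the abstract.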