Understanding Inexplicit Utterances Using Vision for Helper Robots

  • Authors:
  • Zaliyana Mohd Hanafiah, Chizu Yamazaki, Akio Nakamura, Yoshinori Kuno

  • Affiliations:
  • Saitama University, Japan (all authors)

  • Venue:
  • ICPR '04: Proceedings of the 17th International Conference on Pattern Recognition (ICPR'04), Volume 4
  • Year:
  • 2004


Abstract

Speech interfaces should be capable of dealing with inexplicit utterances, including ellipsis and deixis, since these are common phenomena in daily conversation. Their resolution using context and a priori knowledge has been investigated in the fields of natural language and speech understanding. However, some utterances cannot be understood by such symbol processing alone. In this paper, we consider inexplicit utterances that arise because humans have vision. If we are certain that our listeners share some visual information, we often omit things about it from our utterances or mention them only ambiguously. We propose a method of understanding speech with such ambiguities using computer vision. It tracks the human's gaze direction and detects objects in that direction. It also recognizes the human's actions. Based on these pieces of visual information, it understands the human's inexplicit utterances. Experimental results show that the method helps to realize human-friendly speech interfaces.
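The resolution strategy the abstract describes, using the object detected in the user's gaze direction to fill in a deictic reference such as "bring me that", can be sketched roughly as follows. This is a hypothetical illustration, not the authors' implementation; the function names, the 2-D scene, and the object labels are all assumptions.

```python
import math

def angle_between(v1, v2):
    """Angle in radians between two 2-D direction vectors."""
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    n1 = math.hypot(v1[0], v1[1])
    n2 = math.hypot(v2[0], v2[1])
    # Clamp to guard against floating-point drift outside [-1, 1].
    return math.acos(max(-1.0, min(1.0, dot / (n1 * n2))))

def resolve_deixis(gaze_origin, gaze_dir, objects):
    """Pick the detected object whose direction from the user's head
    position is most closely aligned with the estimated gaze direction.

    objects: dict mapping label -> (x, y) position in the scene.
    Returns the label of the best-aligned object.
    """
    best_label, best_angle = None, float("inf")
    for label, pos in objects.items():
        to_obj = (pos[0] - gaze_origin[0], pos[1] - gaze_origin[1])
        a = angle_between(gaze_dir, to_obj)
        if a < best_angle:
            best_label, best_angle = label, a
    return best_label

# Example scene: user at the origin, gazing roughly along +x.
objects = {"cup": (2.0, 0.1), "book": (0.0, 2.0), "remote": (-1.5, 0.5)}
target = resolve_deixis((0.0, 0.0), (1.0, 0.0), objects)
print(f'"Bring me that" -> bring me the {target}')  # the cup lies nearest the gaze ray
```

In the paper's setting, the object positions would come from a vision module detecting objects along the tracked gaze direction, and the resolved label would substitute for the omitted or ambiguous noun phrase in the utterance.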