Exploiting Speech/Gesture Co-occurrence for Improving Continuous Gesture Recognition in Weather Narration

  • Authors:
  • Rajeev Sharma; Jiongyu Cai; Srivat Chakravarthy; Indrajit Poddar; Yogesh Sethi

  • Venue:
  • FG '00: Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition
  • Year:
  • 2000

Abstract

In order to incorporate naturalness into the design of Human-Computer Interfaces (HCI), it is desirable to develop recognition techniques capable of handling continuous natural gesture and speech input. Although many researchers have reported high recognition rates for gesture recognition using Hidden Markov Models (HMMs), the gestures used are mostly pre-defined and bound by syntactic and grammatical constraints. Natural gestures, however, do not string together under such syntactic bindings, and a strict classification of natural gestures is not feasible.

In this paper we examine hand gestures made in a very natural domain: a weather person narrating in front of a weather map. The gestures made by the weather person are embedded in a narration, which provides abundant data from an uncontrolled environment for studying the interaction between speech and gesture in the context of a display. We hypothesize that this domain is very similar to that of a natural human-computer interface. We present an HMM-based architecture for continuous gesture recognition and keyword spotting. To explore the relation between gesture and speech, we conducted a statistical co-occurrence analysis of different gestures against a selected set of spoken keywords. We then demonstrate how this co-occurrence analysis can be exploited to improve the performance of continuous gesture recognition.
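As a rough illustration of how co-occurrence statistics might be combined with per-segment HMM scores, the following minimal Python sketch estimates P(keyword | gesture) from (gesture, keyword) training pairs and adds a weighted log co-occurrence bonus when rescoring gesture hypotheses. This is not the paper's implementation: the gesture labels, keywords, temporal pairing scheme, combination weight `lam`, and floor for unseen pairs are all hypothetical choices made for the sake of the example.

```python
import math
from collections import Counter, defaultdict

def cooccurrence_table(training_pairs):
    """Estimate P(keyword | gesture) from (gesture, keyword) pairs
    observed within some temporal window of each other in training
    narrations."""
    counts = defaultdict(Counter)
    for gesture, keyword in training_pairs:
        counts[gesture][keyword] += 1
    return {
        gesture: {kw: n / sum(kws.values()) for kw, n in kws.items()}
        for gesture, kws in counts.items()
    }

def rescore(hmm_loglik, spotted_keywords, table, lam=1.0, floor=1e-4):
    """Pick the gesture class maximizing the HMM log-likelihood plus a
    weighted log co-occurrence bonus for keywords spotted near the
    gesture segment. Unseen (gesture, keyword) pairs are floored
    rather than zeroed so they penalize but never veto a hypothesis."""
    def score(gesture):
        bonus = sum(
            math.log(table.get(gesture, {}).get(kw, floor))
            for kw in spotted_keywords
        )
        return hmm_loglik[gesture] + lam * bonus
    return max(hmm_loglik, key=score)
```

A small usage example, with invented counts and scores, shows the intended effect: speech evidence can flip a close decision between gesture classes.

```python
pairs = [("point", "here"), ("point", "here"), ("point", "this"),
         ("contour", "front"), ("contour", "pressure")]
table = cooccurrence_table(pairs)
# On HMM score alone "contour" wins (-118.5 > -120.0); the spotted
# keyword "here" co-occurs with "point" in training, tipping the
# combined score toward "point".
print(rescore({"point": -120.0, "contour": -118.5}, ["here"], table))
```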