Situated multi-modal dialog system in vehicles

  • Authors:
  • Teruhisa Misu; Antoine Raux; Ian Lane; Joan Devassy; Rakesh Gupta

  • Affiliations:
  • Honda Research Institute USA, Inc., Mountain View, CA, USA; Honda Research Institute USA, Inc., Mountain View, CA, USA; Carnegie Mellon University, Mountain View, CA, USA; Georgia Institute of Technology, Atlanta, GA, USA; Honda Research Institute USA, Inc., Mountain View, CA, USA

  • Venue:
  • Proceedings of the 6th workshop on Eye gaze in intelligent human machine interaction: gaze in multimodal interaction
  • Year:
  • 2013

Abstract

In this paper, we present Townsurfer, a situated multi-modal dialog system for vehicles. The system integrates multi-modal inputs of speech, geo-location, gaze (face direction), and dialog history to answer drivers' queries about their surroundings. To select the appropriate data source for answering a query, we apply belief tracking across the above modalities. We conducted a preliminary data collection and an evaluation focusing on the effect of gaze (head direction) and geo-location estimation. We report the results and an analysis of the collected data.
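
The abstract does not give implementation details, but the following Python sketch illustrates one way belief tracking could fuse speech, geo-location, gaze, and dialog-history evidence to pick the referent (and hence the data source) for answering a query. All names, candidates, and scores below are illustrative assumptions, not the authors' code.

    # Minimal sketch of multi-modal belief tracking (assumed, for illustration).
    # Each modality contributes a likelihood for every candidate point of
    # interest; beliefs are fused with a naive Bayes-style product and
    # renormalized after each user turn.

    from dataclasses import dataclass


    @dataclass
    class Candidate:
        name: str
        belief: float


    def normalize(cands):
        total = sum(c.belief for c in cands) or 1.0
        for c in cands:
            c.belief /= total


    def update_belief(cands, speech, geo, gaze, history):
        """Multiply each candidate's prior belief by per-modality scores."""
        for c in cands:
            c.belief *= speech[c.name] * geo[c.name] * gaze[c.name] * history[c.name]
        normalize(cands)


    if __name__ == "__main__":
        # Two hypothetical roadside POIs the driver might be asking about.
        cands = [Candidate("cafe_on_left", 0.5), Candidate("museum_on_right", 0.5)]

        update_belief(
            cands,
            speech={"cafe_on_left": 0.6, "museum_on_right": 0.4},    # ASR/NLU match
            geo={"cafe_on_left": 0.7, "museum_on_right": 0.5},       # distance to GPS fix
            gaze={"cafe_on_left": 0.2, "museum_on_right": 0.8},      # head direction
            history={"cafe_on_left": 0.5, "museum_on_right": 0.5},   # dialog context
        )
        best = max(cands, key=lambda c: c.belief)
        print(f"Most likely referent: {best.name} (belief {best.belief:.2f})")

In this toy example the gaze evidence outweighs the slightly better speech match, so the system would answer using data about the POI on the driver's right; the actual fusion model, weighting, and data sources used in Townsurfer are described in the paper itself.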