SDCTD '12 NAACL-HLT Workshop on Future Directions and Needs in the Spoken Dialog Community: Tools and Data
In this paper, we present Townsurfer, a situated multi-modal dialog system for vehicles. The system integrates multi-modal inputs — speech, geo-location, gaze (head direction), and dialog history — to answer drivers' queries about their surroundings. To select the appropriate data source for answering each query, we apply belief tracking across the above modalities. We conducted a preliminary data collection and an evaluation focusing on the effect of gaze (head direction) and geo-location estimation. We report the results and our analysis of the data.
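The abstract's idea of belief tracking across modalities can be illustrated, very loosely, as combining per-modality scores over candidate data sources into a single normalized belief. The sketch below is a hypothetical log-linear fusion in Python; the modality names, weights, and candidate sources are illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: fuse per-modality scores over candidate data
# sources into a normalized belief, then pick the most likely source.
# This is NOT the paper's belief-tracking model, only an illustration.

def fuse_beliefs(modality_scores, weights):
    """Log-linear fusion: multiply each candidate's per-modality scores
    (raised to a modality weight) and renormalize to sum to 1."""
    candidates = next(iter(modality_scores.values())).keys()
    beliefs = {}
    for c in candidates:
        score = 1.0
        for modality, scores in modality_scores.items():
            score *= scores[c] ** weights.get(modality, 1.0)
        beliefs[c] = score
    total = sum(beliefs.values())
    return {c: s / total for c, s in beliefs.items()}

# Illustrative example: speech, geo-location, and gaze each score
# two hypothetical candidate points of interest.
scores = {
    "speech": {"cafe": 0.6, "museum": 0.4},
    "geo":    {"cafe": 0.7, "museum": 0.3},
    "gaze":   {"cafe": 0.2, "museum": 0.8},
}
belief = fuse_beliefs(scores, weights={"speech": 1.0, "geo": 1.0, "gaze": 1.0})
best = max(belief, key=belief.get)  # gaze evidence tips the belief to "museum"
```

Here the gaze modality outweighs the speech and geo-location evidence, so the fused belief favors "museum"; a real system would learn the modality weights rather than fix them at 1.0.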