This paper presents a POMDP-based dialogue system for multimodal human-robot interaction (HRI). Our aim is to exploit a dialogical paradigm to allow natural and robust interaction between the human and the robot. The proposed dialogue system should improve the robustness and flexibility of the overall interactive system, including multimodal fusion, interpretation, and decision-making. The dialogue is represented as a Partially Observable Markov Decision Process (POMDP) in order to cast the inherent communication ambiguity and noise into the dialogue model. POMDPs have been used in spoken dialogue systems, mainly for tourist information services, but their application to multimodal human-robot interaction is novel. This paper presents the proposed model for dialogue representation and the methodology used to compute a dialogue strategy. The whole architecture has been integrated on a mobile robot platform and has been tested in a human-robot interaction scenario to assess its overall performance with respect to baseline controllers.
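At the core of any POMDP-based dialogue manager is the Bayesian belief update: the system never observes the user's intent directly, only noisy multimodal evidence, and it maintains a probability distribution over hidden dialogue states. The following is a minimal sketch of that update under an illustrative two-state toy model (a hidden user goal of "coffee" vs. "tea" and a single "ask" action); the function names and the transition/observation tables are assumptions for exposition, not the paper's actual model.

```python
def belief_update(belief, action, observation, T, O):
    """Bayesian belief update: b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s)."""
    states = list(belief)
    new_belief = {}
    for s2 in states:
        # Predict: propagate the old belief through the transition model.
        prior = sum(T[(s, action)][s2] * belief[s] for s in states)
        # Correct: weight by the likelihood of the observation.
        new_belief[s2] = O[(s2, action)][observation] * prior
    norm = sum(new_belief.values())
    return {s: p / norm for s, p in new_belief.items()}

# Toy dialogue model (hypothetical): the user's goal stays fixed when asked.
states = ["coffee", "tea"]
belief = {"coffee": 0.5, "tea": 0.5}
T = {(s, "ask"): {s2: 1.0 if s2 == s else 0.0 for s2 in states} for s in states}
# Noisy speech/gesture channel: the correct goal is recognized 80% of the time.
O = {(s, "ask"): {w: 0.8 if w == s else 0.2 for w in states} for s in states}

belief = belief_update(belief, "ask", "coffee", T, O)
# The noisy "coffee" observation shifts the belief toward that goal.
```

A dialogue strategy then maps this belief (rather than a single guessed state) to the next robot action, which is what makes the approach robust to recognition errors.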