This paper proposes a method for resolving ambiguities in robot tasks through a multimodal human-robot interface that combines verbal and nonverbal communication. Such ambiguities often arise from failures of the robot's vision system, and they are difficult to eliminate by improving computer vision techniques alone. Instead, the robot asks the human a question whose natural reply contains the information needed to adapt the vision system to the current situation. We present a robot system that uses these verbal and nonverbal behaviors to fetch the object requested by a human.
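A minimal sketch, not taken from the paper, of the question-asking loop described in the abstract: when a hypothetical vision routine returns several candidate objects, the robot asks about an attribute that distinguishes them and uses the human's reply to narrow the candidates. All function names, attributes, and the candidate data here are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Candidate:
    """One object hypothesis from the (hypothetical) vision system."""
    label: str
    color: str
    position: str      # e.g. "left", "right"
    confidence: float

def detect_objects(request: str) -> List[Candidate]:
    # Stand-in for the real vision system; here it returns an ambiguous result.
    return [
        Candidate("cup", "red", "left", 0.48),
        Candidate("cup", "blue", "right", 0.45),
    ]

def clarifying_question(candidates: List[Candidate]) -> str:
    # Ask about an attribute that distinguishes the candidates, so the
    # natural reply carries the information the vision system is missing.
    colors = sorted({c.color for c in candidates})
    if len(colors) > 1:
        return f"Do you mean the {' or the '.join(colors)} one?"
    positions = sorted({c.position for c in candidates})
    return f"Do you mean the one on the {' or the '.join(positions)}?"

def filter_by_reply(candidates: List[Candidate], reply: str) -> List[Candidate]:
    # Keep only candidates consistent with the attributes mentioned in the reply.
    words = reply.lower().split()
    return [c for c in candidates if c.color in words or c.position in words]

def resolve(request: str, max_questions: int = 3) -> Optional[Candidate]:
    candidates = detect_objects(request)
    for _ in range(max_questions):
        if len(candidates) <= 1:
            break
        print("Robot:", clarifying_question(candidates))
        reply = input("Human: ")
        narrowed = filter_by_reply(candidates, reply)
        if narrowed:               # an unhelpful reply leaves candidates unchanged
            candidates = narrowed
    return candidates[0] if candidates else None

if __name__ == "__main__":
    target = resolve("bring me the cup")
    if target:
        print(f"Robot fetches the {target.color} {target.label} on the {target.position}.")
```

The key design point the sketch illustrates is that the question is generated from whatever attribute separates the current candidates, so the reply directly constrains the vision hypotheses rather than requiring the human to understand the vision system's failure.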