An empirical study of sensing and defaulting in planning. Proceedings of the First International Conference on Artificial Intelligence Planning Systems.
Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review. IEEE Transactions on Pattern Analysis and Machine Intelligence.
The Ultimate History of Video Games: From Pong to Pokemon--the Story behind the Craze That Touched Our Lives and Changed the World.
A Gesture Based Interface for Human-Robot Interaction. Autonomous Robots.
Experiences with a mobile robotic guide for the elderly. Proceedings of the Eighteenth National Conference on Artificial Intelligence.
Recognition Approach to Gesture Language Understanding. Proceedings of the International Conference on Pattern Recognition (ICPR '96), Volume III.
Fourier tags: Smoothly degradable fiducial markers for use in human-robot interaction. Proceedings of the Fourth Canadian Conference on Computer and Robot Vision (CRV '07).
Reinforcement learning with limited reinforcement: using Bayes risk for active learning in POMDPs. Proceedings of the 25th International Conference on Machine Learning.
Spoken language interaction with model uncertainty: an adaptive human-robot interaction system. Connection Science (special issue on Language and Robots).
Robust servo-control for underwater robots using banks of visual filters. Proceedings of the 2009 IEEE International Conference on Robotics and Automation (ICRA '09).
Recognizing and interpreting gestures on a mobile robot. Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI '96), Volume 2.
Spatial language for human-robot dialogs. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews.
This paper describes the performance evaluation, through human-interface studies, of a machine vision-based human-robot interaction framework. We describe a visual programming language called RoboChat, and a complementary dialog engine that evaluates the need for confirmation based on utility and risk. Together, RoboChat and the dialog mechanism enable a human operator to send a series of complex instructions to a robot, with the assurance of confirmations in cases of high task cost, command uncertainty, or both. We have performed extensive human-interface studies to evaluate the usability of this framework, both in controlled laboratory conditions and in a variety of outdoor environments. One specific goal for the RoboChat scheme was to aid a scuba diver in operating and programming an underwater robot in a variety of deployment scenarios, and the real-world validations were thus performed on board the Aqua amphibious robot [4], in both underwater and terrestrial environments. The paper describes the details of the visual human-robot interaction framework, with an emphasis on the RoboChat language and the confirmation system, and presents a summary of the performance evaluation experiments performed both on and off board the Aqua vehicle.
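The confirmation policy described above — ask the operator to confirm when a command is costly, when the recognizer is unsure of it, or both — can be sketched as a simple decision rule. This is an illustrative approximation, not the paper's actual utility/risk formulation; the threshold values and function names are hypothetical.

```python
def needs_confirmation(task_cost, command_confidence,
                       cost_threshold=0.7, confidence_threshold=0.6):
    """Decide whether the robot should request confirmation from the operator.

    task_cost: estimated cost of executing the command, normalized to [0, 1]
               (e.g. a risky maneuver near an obstacle scores high).
    command_confidence: the vision system's confidence in the parsed
               command, in [0, 1] (low when fiducial detection is noisy).
    Thresholds are placeholder values for illustration only.
    """
    high_cost = task_cost > cost_threshold
    uncertain = command_confidence < confidence_threshold
    # Confirm on high cost OR high uncertainty; execute directly otherwise.
    return high_cost or uncertain
```

A cheap, confidently recognized command would execute immediately, while an expensive or ambiguous one would first be echoed back to the diver for confirmation — trading a small interaction delay for protection against costly misinterpretations.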