To increase human trust in robots, we have developed a system that provides insight into robot behavior by enabling a robot to answer questions people pose about its actions (e.g., Q: "Why did you turn left there?" A: "I detected a person at the end of the hallway."). Our focus is on generating these explanations in human-understandable terms despite the mathematical, robot-specific representations and planning system the robot uses to make decisions and execute them. We present our work to date on this topic, including the system design and experiments, and discuss areas for future work.
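To make the idea concrete, the sketch below illustrates one simple way such a question-answering layer could translate a robot's internal decision record into natural language. This is not the authors' implementation; all names here (`DecisionRecord`, `TEMPLATES`, `explain`) are hypothetical, and a real system would derive its answers from the robot's actual planner state rather than hand-written templates.

```python
# Illustrative sketch only (not the paper's system): mapping a robot's
# internal, symbol-level decision record to a human-readable explanation.
from dataclasses import dataclass


@dataclass
class DecisionRecord:
    """Hypothetical log entry tying an action to the event that triggered it."""
    action: str    # e.g. "turn_left"
    trigger: str   # e.g. "person_detected"
    location: str  # e.g. "end of the hallway"


# Hypothetical templates mapping (action, trigger) pairs to natural language.
TEMPLATES = {
    ("turn_left", "person_detected"): "I detected a person at the {loc}.",
}


def explain(record: DecisionRecord) -> str:
    """Answer 'Why did you do that?' for a single decision record."""
    template = TEMPLATES.get(
        (record.action, record.trigger),
        # Generic fallback when no specific template exists.
        "I chose to {act} because of {trig}.",
    )
    return template.format(
        loc=record.location,
        act=record.action.replace("_", " "),
        trig=record.trigger.replace("_", " "),
    )


print(explain(DecisionRecord("turn_left", "person_detected", "end of the hallway")))
# → I detected a person at the end of the hallway.
```

The template lookup stands in for the harder problem the abstract describes: bridging from the robot's mathematical planning representation to terms a person understands.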