Virtual training systems provide an effective means to train people for complex, dynamic tasks such as crisis management or firefighting. Intelligent agents are often used to play the characters with whom a trainee interacts. To increase the trainee's understanding of played scenarios, several accounts of agents that can explain the reasons for their actions have been proposed. This paper describes an empirical study of what instructors consider useful agent explanations for trainees. It was found that different explanation types were preferred for different actions, e.g. conditions enabling action execution, goals underlying an action, or goals that become achievable after action execution. When an action has important consequences for other agents, instructors suggested that the other agents' perspectives should be part of the explanation.