A natural language argumentation interface for explanation generation in Markov decision processes
ADT'11 Proceedings of the Second international conference on Algorithmic decision theory
ACM Transactions on Interactive Intelligent Systems (TiiS)
In this paper we address the problem of explaining the recommendations generated by a Markov decision process (MDP). We propose an automatic explanation generation mechanism composed of two main stages. In the first stage, the most relevant variable given the current state is obtained, based on a factored representation of the MDP. The relevant variable is defined as the factor that has the greatest impact on the utility given a certain state and action, and it is the key element of the explanation generation mechanism. In the second stage, an explanation is generated from a general template by combining the information obtained from the MDP with domain knowledge represented as a frame system. The state and action given by the MDP, together with the relevant variable, serve as pointers into the knowledge base to extract the relevant information and fill in the explanation template. In this way, explanations of the recommendations given by the MDP can be generated on-line and incorporated into an intelligent assistant. We have evaluated this mechanism in an intelligent assistant for power plant operator training. The experimental results show that the automatically generated explanations are similar to those given by a domain expert.
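The two stages described above can be sketched in Python. This is a minimal illustration, not the authors' implementation: it assumes the relevant variable is chosen by varying each factor over its domain and measuring the spread of utilities, and that the knowledge base is a plain dictionary standing in for the frame system. All variable, action, and domain names (`drum_level`, `open_fwv`, etc.) are hypothetical.

```python
def most_relevant_variable(state, action, domains, utility):
    """Stage 1 (sketch): pick the factor with the greatest impact on utility.

    For each state variable, vary its value over its domain while holding
    the other factors fixed, and measure the spread of the resulting
    utilities; the variable with the largest spread is returned.
    """
    best_var, best_spread = None, float("-inf")
    for var, values in domains.items():
        utils = [utility({**state, var: v}, action) for v in values]
        spread = max(utils) - min(utils)
        if spread > best_spread:
            best_var, best_spread = var, spread
    return best_var


def fill_template(template, knowledge, state, action, variable):
    """Stage 2 (sketch): fill a general explanation template with facts
    retrieved from the knowledge base, using the recommended action and
    the relevant variable as pointers."""
    return template.format(
        action=knowledge["actions"][action],
        variable=knowledge["variables"][variable],
        value=state[variable],
    )


# Toy power-plant-style example (all names here are illustrative).
def utility(s, a):
    return 10.0 * s["drum_level"] + 1.0 * s["valve_pos"]

state = {"drum_level": 2, "valve_pos": 0}
domains = {"drum_level": [0, 1, 2], "valve_pos": [0, 1]}
var = most_relevant_variable(state, "open_fwv", domains, utility)

knowledge = {
    "actions": {"open_fwv": "open the feedwater valve"},
    "variables": {"drum_level": "the drum water level"},
}
explanation = fill_template(
    "You should {action} because {variable} (currently {value}) "
    "has the greatest impact on the expected utility.",
    knowledge, state, "open_fwv", var,
)
print(explanation)
```

In this toy model the utility is dominated by `drum_level`, so stage 1 selects it, and stage 2 produces a sentence explaining the recommended action in terms of that factor.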