This paper proposes a new model, the EMDP (Evidential Markov Decision Process). It is an MDP (Markov Decision Process) extended to belief functions: rewards are defined for each state transition, as in a classical MDP, while the transitions themselves are modeled as in an EMC (Evidential Markov Chain), i.e. as transitions between sets of states rather than between individual states. The EMDP fits a wider range of applications than an MDPST (MDP with Set-valued Transitions). Generalizing to belief functions makes it possible to handle applications with high uncertainty (imprecise or missing data) where purely probabilistic approaches fail. Implementation results are presented on a search-and-rescue unmanned rotorcraft benchmark.
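To illustrate the core idea of transitions over sets of states, the following is a minimal sketch, not the paper's actual algorithm: all names, the toy state space, and the numbers are invented for illustration, and the pignistic transform used to score actions is one standard way to derive probabilities from belief masses, whereas the EMDP paper may use a different decision criterion.

```python
# Illustrative sketch of one EMDP-style step (assumed structure, not the
# paper's API). Transition uncertainty is a belief mass function over SETS
# of successor states, as in an Evidential Markov Chain; rewards attach to
# state transitions as in a classical MDP.

# Hypothetical toy problem for a search-and-rescue rotorcraft.
states = ["safe", "damaged", "lost"]

# mass[(state, action)] maps frozensets of successor states to belief mass.
# Mass on a non-singleton set expresses imprecision about the successor.
mass = {
    ("safe", "fly_low"): {
        frozenset(["safe"]): 0.6,
        frozenset(["safe", "damaged"]): 0.3,   # imprecise outcome
        frozenset(["damaged", "lost"]): 0.1,
    },
}

# reward[(state, action, successor)], defined per transition as in an MDP.
reward = {
    ("safe", "fly_low", "safe"): 1.0,
    ("safe", "fly_low", "damaged"): -2.0,
    ("safe", "fly_low", "lost"): -10.0,
}

def pignistic(mass_fn):
    """Spread each set's mass uniformly over its elements (pignistic transform)."""
    p = {}
    for subset, m in mass_fn.items():
        for s in subset:
            p[s] = p.get(s, 0.0) + m / len(subset)
    return p

def expected_reward(state, action):
    """One-step expected reward under the pignistic probabilities."""
    p = pignistic(mass[(state, action)])
    return sum(prob * reward[(state, action, s)] for s, prob in p.items())

print(round(expected_reward("safe", "fly_low"), 3))
```

Here the pignistic probabilities are safe 0.75, damaged 0.20, lost 0.05, giving a one-step expected reward of -0.15; a probabilistic MDP would have been forced to split the imprecise mass among singletons up front, losing the distinction between known randomness and lack of data.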