Discrete Bayesian models have been used to represent uncertainty in mobile-robot navigation, but the question of how actions should be chosen under such models remains largely unexplored. This paper formulates the problem as a partially observable Markov decision process (POMDP), for which the optimal solution can be stated. Because computing the optimal control policy is intractable in general, the paper then explores a variety of heuristic control strategies. These strategies are compared experimentally, both in simulation and in runs on an actual robot.
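To make the setting concrete, the sketch below shows the standard discrete Bayes filter that maintains a belief over robot states, together with the most-likely-state (MLS) heuristic, one of the commonly studied approximations in which the robot acts as the underlying MDP policy would in its most probable state. This is an illustrative sketch only: the tensor layout (`T[a][s, s']`, `O[a][s', o]`), the two-state toy problem, and the function names are assumptions, not the paper's actual experimental setup.

```python
import numpy as np

def belief_update(b, T, O, a, o):
    """Bayes filter step: b'(s') ∝ O[a][s', o] * sum_s T[a][s, s'] * b(s)."""
    b_pred = b @ T[a]            # prediction: propagate belief through the action model
    b_new = b_pred * O[a][:, o]  # correction: weight by the likelihood of observation o
    return b_new / b_new.sum()   # renormalize to a probability distribution

def mls_policy(b, mdp_policy):
    """Most-likely-state heuristic: act as the fully observable MDP
    policy would in the belief's most probable state."""
    return mdp_policy[int(np.argmax(b))]

# Toy two-state corridor (hypothetical numbers for illustration):
T = {0: np.array([[0.9, 0.1],    # action 0: mostly stay in place
                  [0.1, 0.9]])}
O = {0: np.array([[0.8, 0.2],    # state 0 tends to yield observation 0
                  [0.2, 0.8]])}  # state 1 tends to yield observation 1

b = np.array([0.5, 0.5])         # uniform initial belief
b = belief_update(b, T, O, a=0, o=0)
action = mls_policy(b, mdp_policy=["left", "right"])
```

After observing `o=0`, the belief shifts toward state 0, so the MLS heuristic issues the MDP action for that state. Richer heuristics (e.g., weighting each state's action by its belief probability) trade off differently between computation and robustness to ambiguity.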