Two main challenges of robot action planning in real domains are uncertain action effects and dynamic environments. In this paper, an instance-based action model is learned empirically by a robot trying actions in the environment. Modeling the action planning problem as a Markov decision process, the learned action model is used to build the transition function. In static environments, standard value iteration is used to compute the optimal policy. For dynamic environments, a fast replanning algorithm is proposed that updates only a subset of the state-action values computed for the static environment. The goal-scoring task in the RoboCup 4-legged league serves as a test-bed: the algorithms are validated on the problem of planning kicks to score goals in the presence of opponent robots. Experimental results, both in simulation and on real robots, show that the instance-based action model outperforms the parametric models used previously, and that incremental replanning significantly improves over off-line planning from scratch.
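The value-iteration step mentioned in the abstract can be sketched as follows. This is a minimal illustration on an invented toy MDP (the states, actions, transition probabilities, and rewards below are assumptions for demonstration only); in the paper, the transition function is instead built from the empirically learned instance-based action model.

```python
# Toy MDP for illustration (hypothetical, not from the paper):
# T[s][a] is a list of (probability, next_state, reward) outcomes.
T = {
    0: {"kick": [(0.8, 1, 1.0), (0.2, 0, 0.0)],  # kick scores with prob 0.8
        "wait": [(1.0, 0, 0.0)]},
    1: {"kick": [(1.0, 1, 0.0)],                 # state 1 is absorbing
        "wait": [(1.0, 1, 0.0)]},
}
GAMMA = 0.9  # discount factor


def value_iteration(T, gamma, eps=1e-6):
    """Iterate the Bellman optimality backup until values converge."""
    V = {s: 0.0 for s in T}
    while True:
        delta = 0.0
        for s in T:
            # Best expected return over all actions available in s.
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in T[s][a])
                for a in T[s]
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V


V = value_iteration(T, GAMMA)
# Fixed point for state 0: V[0] = 0.8 + 0.18 * V[0]  =>  V[0] = 0.8 / 0.82
```

The incremental-replanning idea in the abstract would correspond to re-running backups only for the subset of states whose transition outcomes change when the environment changes, rather than iterating over all states from scratch.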