This article describes Robel, a system for defining a robot controller that learns from experience robust ways of performing a high-level task such as "navigate to". The designer specifies a collection of skills, represented as hierarchical task networks (HTNs) whose primitives are sensory-motor functions. The skills provide alternative ways of combining these sensory-motor functions to achieve the desired task; they are assumed to be complementary and to cover different situations. The relationship between control states, defined through a set of task-dependent features, and the skills appropriate for pursuing the task is learned as a finite observable Markov decision process (MDP). This MDP yields a general policy for the task; it is independent of the environment and characterizes the robot's abilities for the task.
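The learning scheme described above, mapping feature-defined control states to the most appropriate skill, can be sketched as tabular Q-learning over a small MDP whose actions are skills. The sketch below is illustrative only: the skill names (`fast_nav`, `cautious_nav`), the three control states, and the transition/reward model are invented for this example and are not part of the Robel system.

```python
import random

def q_learning(n_states, skills, step, episodes=500,
               alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Learn Q(state, skill) by epsilon-greedy Q-learning over a toy MDP.

    `step(s, skill, rng)` returns (next_state, reward, done).
    """
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(n_states) for a in skills}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Epsilon-greedy skill selection in the current control state.
            if rng.random() < epsilon:
                a = rng.choice(skills)
            else:
                a = max(skills, key=lambda k: Q[(s, k)])
            s2, r, done = step(s, a, rng)
            best_next = 0.0 if done else max(Q[(s2, k)] for k in skills)
            # Standard temporal-difference update.
            Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
            s = s2
    # The learned policy: the best skill for each control state.
    policy = {s: max(skills, key=lambda k: Q[(s, k)]) for s in range(n_states)}
    return policy, Q

# Hypothetical model: state 0 = open corridor, state 1 = cluttered area,
# state 2 = goal reached (terminal).
def step(s, skill, rng):
    if s == 0:
        # Fast navigation is cheap in an open corridor; cautious is slower.
        return (1, -1.0, False) if skill == "fast_nav" else (1, -3.0, False)
    # In clutter, the cautious skill reliably reaches the goal,
    # while the fast skill usually fails and stays stuck.
    if skill == "cautious_nav":
        return (2, -1.0, True)
    return (1, -5.0, False) if rng.random() < 0.8 else (2, -1.0, True)

policy, Q = q_learning(3, ["fast_nav", "cautious_nav"], step)
```

After training, the policy selects the fast skill in the corridor state and the cautious skill in the cluttered state, illustrating how complementary skills covering different situations combine into one environment-independent policy.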