Agents (hardware or software) that act autonomously in an environment must integrate three basic behaviors: planning, execution, and learning. This integration is mandatory when the agent has no knowledge of how its actions affect the environment, of how the environment reacts to its actions, or when it does not receive the goals it must achieve as explicit input. Without an "a priori" theory, autonomous agents should be able to propose their own goals, set up plans for achieving those goals according to previously learned models of the agent and the environment, and learn those models from past experience of successful and failed plan executions. Planning involves selecting a goal to reach and computing a set of actions that will allow the agent to achieve it. Execution deals with the interaction with the environment: applying the planned actions, observing the resulting perceptions, and verifying that the goals have been achieved. Learning is needed to predict the reactions of the environment to the agent's actions, thus guiding the agent to achieve its goals more efficiently.

In this context, most learning systems applied to problem solving have been used to learn control knowledge for guiding the search for a plan; few systems have focused on acquiring descriptions of planning operators. Currently, one of the most widely used techniques for integrating (a form of) planning, execution, and learning is reinforcement learning. However, reinforcement learning approaches usually do not maintain explicit representations of action descriptions, so they cannot reason in terms of goals and ways of achieving them.

In this paper, we present an integrated architecture, LOPE, that learns operator definitions, plans using those operators, and executes the plans in order to refine the acquired operators. The resulting system is domain-independent, and we have performed experiments in a robotic framework.
The results clearly show that the integrated planning, learning, and executing system outperforms the basic planner in that domain.
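The planning-execution-learning loop described in the abstract can be illustrated with a minimal sketch. This is not the actual LOPE implementation: the STRIPS-like operator representation, the success/failure confidence measure, and the domain facts (`at_door`, `door_open`, `inside`) are all assumptions made for the example. It only shows the general idea of learning operator descriptions from observed transitions and then planning with them.

```python
from dataclasses import dataclass

@dataclass
class Operator:
    """A learned, STRIPS-like action model (hypothetical representation)."""
    action: str
    preconds: frozenset   # facts that held when the action was observed
    effects: frozenset    # facts that became true afterwards
    successes: int = 1
    failures: int = 0

    def confidence(self) -> float:
        return self.successes / (self.successes + self.failures)

def learn(operators, state, action, next_state):
    """Refine the operator set from one observed transition (s, a, s')."""
    gained = frozenset(next_state - state)
    for op in operators:
        if op.action == action and op.preconds <= state:
            if op.effects <= next_state:
                op.successes += 1   # prediction confirmed by execution
            else:
                op.failures += 1    # prediction refuted by execution
            return
    # No matching operator: create a new one from this experience.
    operators.append(Operator(action, frozenset(state), gained))

def plan(state, goal, operators, depth=4):
    """Depth-limited forward search, preferring high-confidence operators."""
    if goal <= state:
        return []
    if depth == 0:
        return None
    for op in sorted(operators, key=lambda o: -o.confidence()):
        # Apply only applicable operators that add something new.
        if op.preconds <= state and not op.effects <= state:
            tail = plan(state | op.effects, goal, operators, depth - 1)
            if tail is not None:
                return [op.action] + tail
    return None

# Example: learn two operators from observed transitions, then plan with them.
ops = []
learn(ops, {"at_door"}, "open", {"at_door", "door_open"})
learn(ops, {"at_door", "door_open"}, "go", {"at_door", "door_open", "inside"})
print(plan({"at_door"}, {"inside"}, ops))   # a two-step plan: open, then go
```

A failed execution (e.g. observing that `open` did not produce `door_open`) would lower that operator's confidence, so the planner would prefer better-supported operators in later searches, which is the refinement effect the architecture relies on.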