We study agents situated in partially observable environments who lack the resources to create conformant plans. Instead, they create conditional plans which are partial, and they learn from experience to choose the best of them for execution. Our agent employs an incomplete symbolic deduction system, based on Active Logic and Situation Calculus, for reasoning about actions and their consequences. An Inductive Logic Programming algorithm generalises observations and deduced knowledge in order to choose the best plan for execution. We show results of using the PROGOL learning algorithm to distinguish "bad" plans, and we present three modifications which make the algorithm fit this class of problems better. Specifically, we limit the search space by fixing the semantics of conditional branches within plans, we guide the search by specifying the relative relevance of portions of the knowledge base, and we integrate the learning algorithm into the agent architecture by allowing it direct access to the agent's knowledge encoded in Active Logic. We report on experiments which show that these extensions lead to significantly better learning results.
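The selection scheme described above can be sketched in a few lines: an induced predicate filters out "bad" conditional partial plans, and the agent executes the best of the remaining candidates. This is a minimal illustrative sketch, not the paper's implementation; the plan features, the `is_bad` rule (standing in for a predicate PROGOL might induce), and the scoring function are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ConditionalPlan:
    """Toy stand-in for a conditional partial plan (features are assumptions)."""
    name: str
    branches: int          # number of conditional branches in the plan
    known_failures: int    # failures observed for structurally similar plans
    expected_reward: float # heuristic value of the plan's expected outcome

def is_bad(plan: ConditionalPlan) -> bool:
    # Hypothetical rule of the kind an ILP system could induce from
    # experience: heavily branched plans with a failure history are "bad".
    return plan.known_failures > 0 and plan.branches >= 3

def choose_plan(plans: list[ConditionalPlan]) -> ConditionalPlan:
    # Filter out plans the learned predicate rejects, then pick the
    # highest-valued survivor; fall back to all plans if none survive.
    viable = [p for p in plans if not is_bad(p)]
    candidates = viable or plans
    return max(candidates, key=lambda p: p.expected_reward)
```

The point of the sketch is the division of labour: deduction and observation supply the plan features, the induced predicate prunes, and a simple evaluation picks among what remains.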