Inducing disjunctive and iterative macro-operators from empirical problem-solving traces provides a more powerful knowledge compilation method than simple linear macro-operators. Whereas earlier work focused on when to create iterative macro-operators, this paper addresses how to form them, combining proven optimization methods, such as extraction of loop invariants, with techniques for further improving Rete match efficiency. The disjunctive and iterative composition processes have been implemented in FERMI and its underlying production system language. Empirical results confirm substantial rule-match speedups and system performance improvements across different application domains.
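To make the idea concrete, the core of forming an iterative macro-operator is recognizing a repeated contiguous operator subsequence in a solution trace and collapsing it into a single looping operator. The sketch below is a minimal, hypothetical illustration of that detection step only; the function name `induce_iterative_macro`, the trace representation, and the `('ITERATE', body, count)` encoding are assumptions for illustration, not FERMI's actual algorithm or data structures.

```python
def induce_iterative_macro(trace, min_reps=2):
    """Collapse the longest run of a repeated contiguous operator
    subsequence in a problem-solving trace into one iterative
    macro-operator, encoded as ('ITERATE', body, repetitions).

    NOTE: illustrative sketch only; FERMI's real composition process
    also handles disjunction, loop invariants, and match optimization.
    """
    n = len(trace)
    best = None  # (span_length, start, body_len, reps)
    # Try every candidate loop body length and start position.
    for body_len in range(1, n // 2 + 1):
        for start in range(n - 2 * body_len + 1):
            body = trace[start:start + body_len]
            reps = 1
            # Count how many times the body repeats back-to-back.
            while trace[start + reps * body_len:
                        start + (reps + 1) * body_len] == body:
                reps += 1
            if reps >= min_reps:
                span = reps * body_len
                if best is None or span > best[0]:
                    best = (span, start, body_len, reps)
    if best is None:
        return list(trace)  # no repetition worth compiling
    span, start, body_len, reps = best
    body = tuple(trace[start:start + body_len])
    return (list(trace[:start])
            + [('ITERATE', body, reps)]
            + list(trace[start + span:]))
```

For example, a trace such as `['pickup', 'move', 'drop', 'move', 'drop', 'move', 'drop', 'goal']` would compile to `['pickup', ('ITERATE', ('move', 'drop'), 3), 'goal']`, replacing three linear repetitions with one iterative macro.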