This paper examines several systems which learn a large number of rules (productions), including one which learns 113,938 rules - the largest number ever learned by an AI system, and the largest in any production system in existence. It is important to match these rules efficiently, in order to avoid the machine learning utility problem. Moreover, examination of such large systems reveals new phenomena and calls into question some common assumptions based on previous observations of smaller systems. We first show that the Rete and Treat match algorithms do not scale well with the number of rules in our systems, in part because the number of rules affected by a change to working memory increases with the total number of rules. We also show that the sharing of nodes in the beta part of the Rete network becomes more and more important as the number of rules increases. Finally, we describe and evaluate a new optimization for Rete which improves its scalability and allows two of our systems to learn over 100,000 rules without significant performance degradation.
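The point about beta-node sharing can be sketched in a few lines. The following is a minimal illustration (not the paper's implementation, and the rule conditions are hypothetical): rules whose condition lists begin with a common prefix can share one chain of join nodes in a Rete-like beta network, modeled here as a trie over condition sequences, so the node count grows much more slowly than the naive one-chain-per-rule count.

```python
# Minimal sketch of beta-node sharing in a Rete-like network.
# Each rule is a tuple of condition labels; rules sharing a prefix of
# conditions reuse one chain of join nodes (a trie over the sequences).

def beta_node_count(rules, share=True):
    """Count join nodes needed for a set of rules.

    With share=False, every rule gets its own full chain of nodes.
    With share=True, rules that begin with the same prefix of
    conditions reuse the nodes built for that prefix.
    """
    if not share:
        return sum(len(rule) for rule in rules)
    trie = {}
    count = 0
    for rule in rules:
        node = trie
        for cond in rule:
            if cond not in node:
                node[cond] = {}   # new join node for this prefix
                count += 1
            node = node[cond]     # reuse existing node
    return count

# Hypothetical learned rules: chunks from the same problem space tend
# to open with the same goal/state tests, so prefixes overlap heavily.
rules = [
    ("goal(solve)", "state(s)", "op(a)"),
    ("goal(solve)", "state(s)", "op(b)"),
    ("goal(solve)", "state(s)", "op(c)"),
]

print(beta_node_count(rules, share=False))  # 9: one 3-node chain per rule
print(beta_node_count(rules, share=True))   # 5: shared 2-node prefix + 3 tails
```

As more rules with overlapping prefixes are learned, the gap between the two counts widens, which is the scaling effect the abstract describes.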