When explanation-based learning (EBL) is used for knowledge-level learning (KLL), training examples are essential, and EBL is not simply reducible to partial evaluation. A key enabling factor in this behavior is the use of domain theories in which not every element is believed a priori. With such domain theories, EBL provides a basis for rote learning (deductive KLL) and for induction from multiple examples (nondeductive KLL). This article lays the groundwork for using EBL in KLL by describing how EBL can lead to increased belief, and presents new results from using Soar's chunking mechanism (a variation on EBL) as the basis for a task-independent rote-learning capability and a version-space-based inductive capability. The latter provides a compelling demonstration of nondeductive KLL in Soar and a basis for integrating conventional EBL with induction. However, it also reveals how one of Soar's key assumptions, the non-penetrable memory assumption, makes this integration more complicated than it would otherwise be. This added complexity may turn out to be appropriate, or it may point to where modifications of Soar are needed.
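The version-space inductive capability mentioned above builds on Mitchell's candidate-elimination idea: maintain a specific boundary S and a general boundary G, generalizing S on positive examples and specializing G on negative ones. The following is a minimal illustrative sketch of that idea for conjunctive hypotheses over attribute vectors (with `?` as a wildcard); it is not the paper's Soar/chunking implementation, and the example attributes are hypothetical.

```python
# Minimal candidate-elimination sketch over conjunctive attribute
# hypotheses, where '?' is a wildcard matching any value.
# Illustrative only; the version-space mechanism realized through
# Soar's chunking is considerably more involved.

WILD = '?'

def matches(h, x):
    """True if hypothesis h covers example x."""
    return all(hv in (WILD, xv) for hv, xv in zip(h, x))

def generalize(s, x):
    """Minimally generalize the specific boundary s to cover positive x."""
    return tuple(sv if sv == xv else WILD for sv, xv in zip(s, x))

def specialize(g, s, x):
    """Minimally specialize general hypothesis g to exclude negative x,
    keeping only specializations consistent with the specific boundary s."""
    out = []
    for i, gv in enumerate(g):
        if gv == WILD and s[i] != WILD and s[i] != x[i]:
            out.append(g[:i] + (s[i],) + g[i + 1:])
    return out

def candidate_elimination(examples):
    """examples: list of (attribute_tuple, is_positive) pairs."""
    positives = [x for x, label in examples if label]
    s = positives[0]                      # most specific start
    g = [(WILD,) * len(s)]                # most general start
    for x, label in examples:
        if label:
            s = s if matches(s, x) else generalize(s, x)
            g = [h for h in g if matches(h, x)]
        else:
            g = [h2 for h in g
                 for h2 in ([h] if not matches(h, x) else specialize(h, s, x))]
    return s, g
```

For instance, two positive examples `('sunny', 'warm', 'normal')` and `('sunny', 'warm', 'high')` followed by the negative example `('rainy', 'cold', 'high')` yield `s = ('sunny', 'warm', '?')`, with G holding the minimal specializations that still exclude the negative example.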