Explanation-Based Neural Network Learning for Robot Control
Advances in Neural Information Processing Systems 5 (NIPS Conference)
Many researchers have noted the importance of combining inductive and analytical learning, yet we still lack combined learning methods that are effective in practice. We present here a learning method that combines explanation-based learning from a previously learned approximate domain theory with inductive learning from observations. This method, called explanation-based neural network learning (EBNN), is based on a neural network representation of domain knowledge. Explanations are constructed by chaining together inferences from multiple neural networks. In contrast with symbolic approaches to explanation-based learning, which extract weakest preconditions from the explanation, EBNN extracts the derivatives of the target concept with respect to the training example features. These derivatives summarize the dependencies within the explanation and are used to bias the inductive learning of the target concept. Experimental results on a simulated robot control task show that EBNN requires significantly fewer training examples than standard inductive learning. Furthermore, the method is shown to be robust to errors in the domain theory, operating effectively over a broad spectrum from very strong to very weak domain theories.
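The core mechanism described in the abstract — using derivatives extracted from a domain-theory explanation to bias inductive fitting — can be illustrated with a minimal sketch. This is not the paper's implementation; it is a hypothetical, simplified example in which the learner is a cubic polynomial rather than a neural network, and the "domain theory" is assumed to supply an approximate slope dy/dx at each training point. The loss combines value errors with slope errors, in the spirit of EBNN's tangent-fitting:

```python
import numpy as np

def fit_with_slopes(xs, ys, slopes, mu=1.0, lr=0.01, steps=5000):
    """Fit f(x) = w0 + w1*x + w2*x^2 + w3*x^3 by gradient descent,
    penalizing both value errors and deviations from the slopes
    supplied by the (approximate) domain theory."""
    w = np.zeros(4)
    for _ in range(steps):
        f = w[0] + w[1] * xs + w[2] * xs**2 + w[3] * xs**3   # model values
        df = w[1] + 2 * w[2] * xs + 3 * w[3] * xs**2          # model derivatives
        ev = f - ys        # value errors at the training examples
        es = df - slopes   # slope errors vs. explanation-derived derivatives
        # Gradient of  sum(ev^2) + mu * sum(es^2)  with respect to w
        grad = np.array([
            2 * np.sum(ev),
            2 * np.sum(ev * xs) + mu * 2 * np.sum(es),
            2 * np.sum(ev * xs**2) + mu * 2 * np.sum(es * 2 * xs),
            2 * np.sum(ev * xs**3) + mu * 2 * np.sum(es * 3 * xs**2),
        ])
        w -= lr * grad
    return w

# Toy target y = x^2; the "domain theory" provides slopes 2x.
xs = np.array([-1.0, 0.0, 1.0])
w = fit_with_slopes(xs, xs**2, 2 * xs)
```

With only three examples, the slope constraints pin down the fit far more tightly than values alone would; weakening `mu` recovers purely inductive fitting, mirroring EBNN's graceful degradation under weak domain theories.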