This paper describes and evaluates an approach to combining empirical and explanation-based learning called Induction Over the Unexplained (IOU). IOU is intended for learning concepts that can be partially explained by an overly general domain theory. An eclectic evaluation of the method is presented, including results from all three major evaluation approaches: empirical, theoretical, and psychological. Empirical results show that IOU is effective at refining overly general domain theories and that it learns more accurate concepts from fewer examples than a purely empirical approach. The application of theoretical results from PAC learnability theory explains why IOU requires fewer examples. IOU is also shown to model psychological data demonstrating the effect of background knowledge on human learning.
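The core idea can be illustrated with a minimal, hypothetical Python sketch (not the paper's actual algorithm): an overly general domain theory covers all positive examples plus some negatives, and empirical induction over the features the theory does not mention ("the unexplained") supplies extra conditions that exclude the false positives. The feature names and the simple constant-value induction rule below are illustrative assumptions.

```python
# Toy sketch of Induction Over the Unexplained (IOU).
# Assumption: the overly general theory is "anything with wings flies".

def theory_covers(example):
    # Overly general domain theory: covers all flyers, but also some
    # non-flyers (e.g. a toy plane that has wings).
    return example["wings"]

def iou_learn(examples, labels, unexplained_features):
    # Keep only examples the theory covers, then induce, over features
    # the theory does not mention, conditions that hold for every
    # covered positive and rule out at least one covered negative.
    covered = [(e, y) for e, y in zip(examples, labels) if theory_covers(e)]
    conditions = {}
    for f in unexplained_features:
        pos_vals = {e[f] for e, y in covered if y}
        if len(pos_vals) == 1:               # constant across positives
            val = pos_vals.pop()
            if any(e[f] != val for e, y in covered if not y):
                conditions[f] = val          # discriminates a negative
    return conditions

def classify(example, conditions):
    # Final concept = domain theory AND empirically induced conditions.
    return theory_covers(example) and all(
        example[f] == v for f, v in conditions.items())

examples = [
    {"wings": True,  "feathers": True,  "size": "small"},  # sparrow:   +
    {"wings": True,  "feathers": True,  "size": "large"},  # eagle:     +
    {"wings": True,  "feathers": False, "size": "small"},  # toy plane: -
    {"wings": False, "feathers": False, "size": "large"},  # dog:       -
]
labels = [True, True, False, False]

conds = iou_learn(examples, labels, ["feathers", "size"])
print(conds)  # -> {'feathers': True}
```

Because induction only has to account for the part of the concept the theory leaves unexplained, fewer examples are needed than for learning the whole concept empirically, which is the intuition behind the paper's PAC-learnability argument.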