The present study aims to provide insight into the nature of incremental learning within Gold's model of identification in the limit. Focusing on natural requirements such as consistency and conservativeness, incremental learning is analysed both for learning from positive examples only and for learning from both positive and negative examples. The results illustrate how different consistency and conservativeness demands affect the capabilities of incremental learners. They may serve as a first step towards characterising the structure of classes that are typically learnable incrementally, and thus towards developing uniform incremental learning methods.
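To make the setting concrete, the following is a minimal sketch, not taken from the paper, of an iterative (incremental) learner in Gold's identification-in-the-limit model. The toy target class L_k = {0, 1, ..., k} and the function names are illustrative assumptions; the learner stores only its current hypothesis, which here is the largest example seen so far, making it both consistent and conservative in the sense discussed above.

```python
# A toy iterative learner for the hypothetical class L_k = {0, 1, ..., k},
# learning from positive examples only. The learner's entire memory is its
# current hypothesis (the conjectured index k).

def iterative_learner(hypothesis, example):
    """One update step: new hypothesis from old hypothesis and one example.

    Consistent: the hypothesis always covers all examples seen so far,
    because the maximum is monotone in the data.
    Conservative: the hypothesis changes only when the new example
    contradicts the current conjecture.
    """
    if hypothesis is None or example > hypothesis:
        return example  # mind change, forced by a contradicting example
    return hypothesis   # keep the current conjecture


def learn(text):
    """Feed a text (a sequence of positive examples) to the learner."""
    h = None
    for x in text:
        h = iterative_learner(h, x)
    return h  # conjectured index k of the language L_k


# On any text enumerating L_3 = {0, 1, 2, 3}, the learner converges to 3.
print(learn([1, 0, 3, 2, 3, 1]))  # -> 3
```

On any text for L_k the learner converges after finitely many mind changes, illustrating identification in the limit under severe memory restrictions; the paper's results concern how far such consistency and conservativeness demands can be maintained for richer classes.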