We investigate the principal learning capabilities of iterative learners in more detail, confining ourselves to the learnability of indexable concept classes. The general scenario of iterative learning is as follows: an iterative learner successively takes as input one element of a text (or an informant) for a target concept, together with its previously made hypothesis, and outputs a new hypothesis about the target concept. The sequence of hypotheses has to converge to a hypothesis that correctly describes the target concept.

We study two variants of this basic scenario and compare the learning capabilities of all resulting models of iterative learning to one another, as well as to the standard learning models of finite inference, conservative identification, and learning in the limit.

First, we consider the case in which an iterative learner has to learn from fat texts (fat informants) only. In this setting, it is guaranteed that relevant information is, in principle, accessible at any time during the learning process. Second, we study a variant of iterative learning in which the learner is supposed to succeed no matter which initial hypothesis is chosen. This variant is suited to describing scenarios typical of case-based reasoning.
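To make the protocol concrete, the following minimal Python sketch illustrates an iterative learner on a hypothetical indexable class, the classes L_n = {0, 1, ..., n} over the natural numbers, learned from positive data. The toy class and all names are illustrative assumptions, not taken from the paper; the essential point is that each update depends only on the previous hypothesis and the single current example, with no memory of earlier data.

from typing import Iterable

def iterative_learner(previous_hypothesis: int, example: int) -> int:
    """One update step: a new hypothesis from the previous hypothesis
    and exactly one element of the text (no other memory)."""
    # For the toy class L_n = {0, ..., n}, the largest element seen so
    # far is a correct index once the maximum of the target has appeared.
    return max(previous_hypothesis, example)

def learn_in_the_limit(text: Iterable[int], initial_hypothesis: int = 0) -> int:
    """Feed a finite prefix of a text to the learner and return the
    final hypothesis. On a fat text, every element of the target recurs
    infinitely often, so the hypothesis sequence stabilizes on a
    correct index for the target concept."""
    hypothesis = initial_hypothesis
    for example in text:
        hypothesis = iterative_learner(hypothesis, example)
    return hypothesis

# Example: a text prefix for the target concept L_4 = {0, 1, 2, 3, 4}.
print(learn_in_the_limit([1, 0, 3, 2, 4, 4, 1]))  # -> 4

Note that this toy learner never corrects an initial hypothesis larger than the maximum of the target concept, which hints at why requiring success from an arbitrary initial hypothesis, as in the second variant above, is a genuine additional demand on the learner.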