In the present paper, we investigate the principal learning capabilities of iterative learners in more detail. The general scenario of iterative learning is as follows. An iterative learner successively takes as input one element of a text (an informant) of a target concept together with its previously made hypothesis, and outputs a new hypothesis about the target concept. The sequence of hypotheses has to converge to a hypothesis that correctly describes the target concept. We study the following variants of this basic scenario. First, we consider the case that an iterative learner has to learn from redundant texts (informants) only. A text (an informant) is redundant if it contains every data item infinitely many times. This approach guarantees that relevant information is, in principle, accessible at any point in the learning process. Second, we study a version of iterative learning in which an iterative learner is supposed to learn independently of the choice of the initial hypothesis. In contrast, the basic scenario of iterative inference assumes that the initial hypothesis is the same for every learning task, which allows certain coding tricks. We compare the learning capabilities of all models of iterative learning from text and from informant, respectively, to one another as well as to finite inference, conservative identification, and learning in the limit from text and from informant, respectively.
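To make the basic scenario concrete, the sketch below is a minimal Python rendering of it, written for this summary; the names run_on_text, IterativeLearner, and the toy set_learner are illustrative assumptions and do not come from the paper. It shows a learner that receives only its previous hypothesis and the current text element, driven by a redundant text in which every data item recurs.

```python
# Minimal sketch of the iterative-learning scenario described above.
# All names here are illustrative assumptions, not the paper's notation.
from typing import Callable, Iterable, TypeVar
import itertools

Hypothesis = TypeVar("Hypothesis")  # e.g. an index into a hypothesis space
DataItem = TypeVar("DataItem")      # one element of a text (a word of the target language)

# An iterative learner sees only its previous hypothesis and the current
# data item; it has no access to earlier elements of the text.
IterativeLearner = Callable[[Hypothesis, DataItem], Hypothesis]


def run_on_text(learner, initial_hypothesis, text, max_steps=20):
    """Feed a finite prefix of a text to an iterative learner, item by item.

    In the formal model the text is infinite and success means that the
    hypothesis sequence converges to a correct description of the target
    concept; here we can only observe a prefix and return the last hypothesis.
    """
    hypothesis = initial_hypothesis
    for item in itertools.islice(text, max_steps):
        # The only information carried between steps is the hypothesis itself.
        hypothesis = learner(hypothesis, item)
    return hypothesis


# Toy example: the target concept is the finite language {"a", "ab", "abb"}.
# A redundant text contains every data item infinitely often; cycling
# through the language is one simple way to produce such a text.
target_language = ["a", "ab", "abb"]
redundant_text = itertools.cycle(target_language)

# Toy learner that uses the set of words seen so far as its "hypothesis"
# (a deliberate simplification; iterative learners in the paper output
# descriptions such as grammar indices, not raw data sets).
def set_learner(hypothesis, item):
    return hypothesis | {item}

print(run_on_text(set_learner, frozenset(), redundant_text))
# e.g. frozenset({'a', 'ab', 'abb'})
```

The deliberate design choice in this sketch is that run_on_text carries nothing between steps except the hypothesis itself, which is exactly the memory restriction that distinguishes iterative learning from unrestricted learning in the limit; the redundant text illustrates why repetition can compensate for that restriction, since any item the learner "forgets" will be presented again later.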