Systems that learn: an introduction to learning theory for cognitive and computer scientists. Communications of the ACM.
Learning automata from ordered examples. COLT '88: Proceedings of the First Annual Workshop on Computational Learning Theory.
Polynomial-time inference of arbitrary pattern languages. New Generation Computing (selected papers from the International Workshop on Algorithmic Learning Theory, 1990).
Language learning in dependence on the space of hypotheses. COLT '93: Proceedings of the Sixth Annual Conference on Computational Learning Theory.
An incremental concept formation approach for learning from databases. Theoretical Computer Science (special issue on formal methods in databases and software engineering).
Open problems in “systems that learn”.
Proceedings of the 30th IEEE Symposium on Foundations of Computer Science.
Language learning from texts (extended abstract): mind changes, limited memory and monotonicity. COLT '95: Proceedings of the Eighth Annual Conference on Computational Learning Theory.
Incremental learning from positive data. Journal of Computer and System Sciences.
Advances in knowledge discovery and data mining.
Theoretical Computer Science (special issue on algorithmic learning theory).
Incremental concept learning for bounded data mining. Information and Computation.
Machine Learning.
Getting Order Independence in Incremental Learning. ECML '93: Proceedings of the European Conference on Machine Learning.
A Guided Tour Across the Boundaries of Learning Recursive Languages. Algorithmic Learning for Knowledge-Based Systems (GOSLER Final Report).
Program Synthesis in the Presence of Infinite Number of Inaccuracies. AII '94: Proceedings of the 4th International Workshop on Analogical and Inductive Inference / Algorithmic Learning Theory.
ALT '95: Proceedings of the 6th International Conference on Algorithmic Learning Theory.
Vacillatory and BC Learning on Noisy Data. ALT '96: Proceedings of the 7th International Workshop on Algorithmic Learning Theory.
Synthesizing Learners Tolerating Computable Noisy Data. ALT '98: Proceedings of the 9th International Conference on Algorithmic Learning Theory.
Controlled Redundancy in Incremental Rule Learning. ECML '93: Proceedings of the European Conference on Machine Learning.
Synthesizing Noise-Tolerant Language Learners. ALT '97: Proceedings of the 8th International Workshop on Algorithmic Learning Theory.
Formal languages and their relation to automata.
Learning Recursive Concepts with Anomalies. ALT '00: Proceedings of the 11th International Conference on Algorithmic Learning Theory.
A novel local patch framework for fixing supervised learning models. Proceedings of the 21st ACM International Conference on Information and Knowledge Management.
This paper provides a systematic study of incremental learning from noise-free and from noisy data, distinguishing between learning from positive data only and learning from both positive and negative data. Our study relies on the notion of noisy data introduced in [22]. The basic scenario, called iterative learning, is as follows: in every learning stage, an algorithmic learner takes as input one element of an information sequence for a target concept together with its previously made hypothesis, and outputs a new hypothesis. The sequence of hypotheses has to converge to a hypothesis that correctly describes the target concept. We study two refinements of this scenario. Bounded example-memory inference generalizes iterative inference by allowing the learner to additionally store an a priori bounded number of carefully chosen data elements, while feedback learning generalizes it by allowing the learner to additionally ask whether or not a particular data element has already appeared in the data seen so far. For learning from noise-free data, we show that, when both positive and negative data are available, restrictions on the accessibility of the input data do not limit the learning capabilities if and only if the iterative learners are allowed to query the history of the learning process or to store at least one carefully selected data element. This contrasts nicely with the fact that, when only positive data are available, restrictions on the accessibility of the input data seriously affect the capabilities of all types of incremental learning (cf. [18]). For learning from noisy data, we present characterizations of all variants of incremental learning in terms that are independent of learning theory; the relevant conditions are purely structural. Surprisingly, both when learning from noisy positive data only and when learning from noisy positive and negative data, iterative learners are already exactly as powerful as unconstrained learning devices.
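To make the basic scenario concrete, here is a minimal Python sketch of one learning stage, instantiated for the deliberately simple class of finite languages, where the conjectured finite set itself serves as the hypothesis. The function names (iterative_learner, bounded_memory_learner, run_text), the largest-elements memory policy, and the choice of concept class are illustrative assumptions, not the paper's formalism.

from typing import FrozenSet, Iterable, Optional

Hypothesis = FrozenSet[int]

def iterative_learner(hypothesis: Optional[Hypothesis], datum: int) -> Hypothesis:
    # One stage: the new hypothesis depends only on the previous hypothesis
    # and the single current datum; earlier data are not accessible.
    seen = set(hypothesis) if hypothesis is not None else set()
    seen.add(datum)
    return frozenset(seen)

def bounded_memory_learner(hypothesis, memory, datum, k=1):
    # Bounded example-memory stage: besides the hypothesis, the learner may
    # carry at most k stored data elements from stage to stage. The policy
    # here (keep the k largest elements seen) is an arbitrary illustration.
    memory = sorted(set(memory) | {datum}, reverse=True)[:k]
    return iterative_learner(hypothesis, datum), memory

def run_text(text: Iterable[int]) -> Hypothesis:
    # Feed a positive-data presentation (a "text") one element at a time;
    # for a finite target concept the hypothesis sequence converges once
    # every element has appeared at least once.
    h: Optional[Hypothesis] = None
    for datum in text:
        h = iterative_learner(h, datum)
    return h

# A text for the target concept {2, 3, 5}; texts may repeat elements.
print(run_text([2, 3, 2, 5, 3, 5, 5]))  # frozenset({2, 3, 5})

h, mem = bounded_memory_learner(frozenset({2}), [], 7, k=1)
print(h, mem)  # frozenset({2, 7}) [7]

For this toy class the iterativeness restriction costs nothing, since the hypothesis can absorb every datum it receives; the restriction bites for concept classes whose hypotheses cannot simply record all of the data seen so far.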