Iterative learning ($\textbf{It}$-learning) is a Gold-style learning model in which each of a learner's output conjectures may depend only upon the learner's current conjecture and the current input element. Two extensions of the $\textbf{It}$-learning model are considered, each of which involves parallelism. The first is to run, in parallel, distinct instantiations of a single learner on each input element. The second is to run, in parallel, $n$ individual learners incorporating the first extension, and to allow the $n$ learners to communicate their results. In most contexts, parallelism is only a means of improving efficiency. However, as shown herein, learners incorporating the first extension are more powerful than $\textbf{It}$-learners, and collective learners resulting from the second extension increase in learning power as $n$ increases. Attention is paid to how one would actually implement a learner incorporating each extension. Parallelism is the underlying mechanism employed.
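The $\textbf{It}$-learning restriction (the next conjecture is computed from the current conjecture and the current input element alone, with no other memory of past data) can be sketched as follows. The toy language class $L_k = \{0, \dots, k\}$ and the particular learner used here are illustrative assumptions for exposition, not a construction from the paper:

```python
# A minimal sketch of Gold-style iterative (It-) learning on positive data,
# assuming (hypothetically) the language class L_k = {0, 1, ..., k}.
# The learner is memory-limited: each new conjecture is a function of only
# the current conjecture and the current input element.

def it_learner(conjecture, element):
    """One It-learning step: map (current conjecture, current element) to a
    new conjecture, here an index k naming the language {0, ..., k}."""
    if conjecture is None:          # no conjecture yet: guess from the element
        return element
    return max(conjecture, element)  # enlarge the guess only if forced to

def run(text):
    """Feed a text (a finite sequence of positive examples) to the learner
    and return the sequence of conjectures it outputs."""
    conjecture = None
    conjectures = []
    for element in text:
        conjecture = it_learner(conjecture, element)
        conjectures.append(conjecture)
    return conjectures

# On the text 2, 0, 5, 3 the conjectures are 2, 2, 5, 5: the learner
# converges to 5, i.e. the language {0, 1, 2, 3, 4, 5}.
```

The point of the restriction is visible in `it_learner`: it cannot consult any earlier input elements, only its own last output, which is exactly the limitation the paper's two parallel extensions are designed to relax.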