To study the problem of learning from noisy data, the common approach is to use a statistical model of noise: the influence of the noise is then assessed against pragmatic or statistical criteria, within a paradigm that takes a distribution over the data into account. In this article, we study noise as a nonstatistical phenomenon by defining the concept of systematic noise. We establish several ways of learning (in the limit) from noisy data. The first is based on a technique of reduction between problems: one learns directly from the data known to be noisy, and then denoises the learned function. The second consists in denoising the training examples on the fly, thereby identifying good examples in the limit, and then learning from uncorrupted data. In both cases we give sufficient conditions under which learning is possible, and we show through various examples (coming in particular from the field of grammatical inference) that the two techniques are complementary.
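As a minimal illustration of the distinction the abstract draws (this is a toy sketch, not the paper's actual noise model or algorithms), systematic noise can be pictured as a fixed, deterministic corruption applied to every example. The two learning routes then differ only in where the inverse of that corruption is applied: after learning (to the learned function) or before learning (to each training example, "on the fly").

```python
# Toy model of systematic noise: a fixed, deterministic corruption applied
# uniformly to every string (here: every symbol is doubled). The functions
# `corrupt` and `denoise` are hypothetical names for this sketch.

def corrupt(w: str) -> str:
    """Systematic noise: duplicate every symbol of w."""
    return "".join(c * 2 for c in w)

def denoise(w: str) -> str:
    """Invert the known corruption: keep every other symbol."""
    return w[::2]

clean_sample = ["ab", "aab", "ba"]
noisy_sample = [corrupt(w) for w in clean_sample]

# Route 1 (reduction): learn a function L from the noisy data, then denoise
# the learned function by composing it with the inverse corruption, i.e.
# answer membership queries via  w -> L(corrupt(w)).
learned_noisy = set(noisy_sample)          # stand-in "learner": memorize
route1 = lambda w: corrupt(w) in learned_noisy

# Route 2 (on-the-fly denoising): denoise each example first, identifying
# good examples, then learn from uncorrupted data.
recovered = [denoise(w) for w in noisy_sample]
learned_clean = set(recovered)             # stand-in "learner": memorize
route2 = lambda w: w in learned_clean
```

Because the noise is systematic (deterministic and invertible) rather than statistical, both routes recognize exactly the same clean strings; the interesting question studied in the article is under which conditions each route yields identification in the limit for nontrivial learners and noise models.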