Communications of the ACM
Learning regular sets from queries and counterexamples
Information and Computation
The minimum consistent DFA problem cannot be approximated within any polynomial
STOC '89 Proceedings of the twenty-first annual ACM symposium on Theory of computing
Cryptographic limitations on learning Boolean formulae and finite automata
STOC '89 Proceedings of the twenty-first annual ACM symposium on Theory of computing
Learning simple concepts under simple distributions
SIAM Journal on Computing
Random DFA's can be approximately learned from sparse uniform examples
COLT '92 Proceedings of the fifth annual workshop on Computational learning theory
A computational model of teaching
COLT '92 Proceedings of the fifth annual workshop on Computational learning theory
COLT '93 Proceedings of the sixth annual conference on Computational learning theory
Inference of finite automata using homing sequences
Information and Computation
Journal of Computer and System Sciences
An introduction to Kolmogorov complexity and its applications (2nd ed.)
Introduction to Automata Theory, Languages, and Computation
Machine Learning
Simple DFA are Polynomially Probably Exactly Learnable from Simple Examples
ICML '99 Proceedings of the Sixteenth International Conference on Machine Learning
What Is the Search Space of the Regular Inference?
ICGI '94 Proceedings of the Second International Colloquium on Grammatical Inference and Applications
Characteristic sets for polynomial grammatical inference
ICGI '96 Proceedings of the 3rd International Colloquium on Grammatical Inference: Learning Syntax from Sentences
PAC Learning with Simple Examples
STACS '96 Proceedings of the 13th Annual Symposium on Theoretical Aspects of Computer Science
Inductive Inference, DFAs, and Computational Complexity
AII '89 Proceedings of the International Workshop on Analogical and Inductive Inference
PAC Learning under Helpful Distributions
ALT '97 Proceedings of the 8th International Conference on Algorithmic Learning Theory
Learning DFA from Simple Examples
ALT '97 Proceedings of the 8th International Conference on Algorithmic Learning Theory
Inferring Deterministic Linear Languages
COLT '02 Proceedings of the 15th Annual Conference on Computational Learning Theory
PAC-learnability of Probabilistic Deterministic Finite State Automata
The Journal of Machine Learning Research
Mining process models with non-free-choice constructs
Data Mining and Knowledge Discovery
Precise analysis of string expressions
SAS'03 Proceedings of the 10th international conference on Static analysis
A memetic grammar inference algorithm for language learning
Applied Soft Computing
Learning the grammar of distant change in the world-wide web
AI'04 Proceedings of the 17th Australian joint conference on Advances in Artificial Intelligence
On the learnability of shuffle ideals
ALT'12 Proceedings of the 23rd international conference on Algorithmic Learning Theory
Learning stochastic timed automata from sample executions
ISoLA'12 Proceedings of the 5th international conference on Leveraging Applications of Formal Methods, Verification and Validation: technologies for mastering change - Volume Part I
On the learnability of shuffle ideals
The Journal of Machine Learning Research
Efficient learning of DFA is a challenging research problem in grammatical inference. It is known that both exact and approximate (in the PAC sense) identifiability of DFA is hard. Pitt posed the following open research problem: “Are DFA PAC-identifiable if examples are drawn from the uniform distribution, or some other known simple distribution?” (Pitt, in Lecture Notes in Artificial Intelligence, 397, pp. 18–44, Springer-Verlag, 1989). We demonstrate that the class of DFA whose canonical representations have logarithmic Kolmogorov complexity is efficiently PAC learnable under the Solomonoff-Levin universal distribution (m). We prove that the class of DFA is efficiently learnable under the PACS (PAC learning with simple examples) model (Denis, D'Halluin & Gilleron, STACS '96, Proceedings of the 13th Annual Symposium on Theoretical Aspects of Computer Science, pp. 231–242, 1996), wherein positive and negative examples are sampled according to the universal distribution conditional on a description of the target concept. Further, we show that any concept that is learnable under Gold's model of learning from characteristic samples, Goldman and Mathias' polynomial teachability model, or the model of learning from example-based queries is also learnable under the PACS model.
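As background for the distributions mentioned in the abstract, the following is a brief sketch in standard notation that is assumed here rather than quoted from the paper: K(x) denotes the prefix Kolmogorov complexity of a string x, K(x | r) its complexity given a representation r of the target concept, and lambda_r a normalizing constant.

% Coding theorem: the Solomonoff-Levin universal distribution assigns, up to a
% multiplicative constant, probability
\[ \mathbf{m}(x) \;=\; 2^{-K(x)} . \]
% In the PACS model, examples are instead drawn from the universal distribution
% conditioned on a description r of the target concept,
\[ \mathbf{m}(x \mid r) \;=\; \lambda_r \, 2^{-K(x \mid r)} , \]
% so strings that are simple relative to the target (small K(x | r)) receive
% comparatively high probability, which is what makes a characteristic set of
% "simple examples" of the target likely to appear in a polynomial-sized sample.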