Communications of the ACM
Information Processing Letters
Optimization, approximation, and complexity classes
STOC '88 Proceedings of the twentieth annual ACM symposium on Theory of computing
Computational limitations on learning from examples
Journal of the ACM (JACM)
Cryptographic limitations on learning Boolean formulae and finite automata
STOC '89 Proceedings of the twenty-first annual ACM symposium on Theory of computing
Equivalence of models for polynomial learnability
COLT '88 Proceedings of the first annual workshop on Computational learning theory
On the learnability of finite automata
COLT '88 Proceedings of the first annual workshop on Computational learning theory
The Complexity of Near-Optimal Graph Coloring
Journal of the ACM (JACM)
Introduction To Automata Theory, Languages, And Computation
Computers and Intractability: A Guide to the Theory of NP-Completeness
The complexity of satisfiability problems
STOC '78 Proceedings of the tenth annual ACM symposium on Theory of computing
An application of the theory of computational complexity to the study of inductive inference.
Inference of finite automata using homing sequences
STOC '89 Proceedings of the twenty-first annual ACM symposium on Theory of computing
Computational learning theory: survey and selected bibliography
STOC '92 Proceedings of the twenty-fourth annual ACM symposium on Theory of computing
Random DFA's can be approximately learned from sparse uniform examples
COLT '92 Proceedings of the fifth annual workshop on Computational learning theory
Robust trainability of single neurons
COLT '92 Proceedings of the fifth annual workshop on Computational learning theory
Cryptographic limitations on learning Boolean formulae and finite automata
Journal of the ACM (JACM)
Inference and minimization of hidden Markov chains
COLT '94 Proceedings of the seventh annual conference on Computational learning theory
Using computational learning strategies as a tool for combinatorial optimization
Annals of Mathematics and Artificial Intelligence
Applying Learning by Examples for Digital Design Automation
Applied Intelligence
Learning DFA from Simple Examples
Machine Learning
Partition-Refining Algorithms for Learning Finite State Automata
ISMIS '02 Proceedings of the 13th International Symposium on Foundations of Intelligent Systems
Model-carrying code: a practical approach for safe execution of untrusted applications
SOSP '03 Proceedings of the nineteenth ACM symposium on Operating systems principles
Javert: fully automatic mining of general temporal properties from dynamic traces
Proceedings of the 16th ACM SIGSOFT International Symposium on Foundations of software engineering
One-Clock Deterministic Timed Automata Are Efficiently Identifiable in the Limit
LATA '09 Proceedings of the 3rd International Conference on Language and Automata Theory and Applications
Learning dynamics: system identification for perceptually challenged agents
Artificial Intelligence
Inferring finite automata with stochastic output functions and an application to map learning
AAAI'92 Proceedings of the tenth national conference on Artificial intelligence
The efficiency of identifying timed automata and the power of clocks
Information and Computation
A randomised inference algorithm for regular tree languages
Natural Language Engineering
Regular inference as vertex coloring
ALT'12 Proceedings of the 23rd international conference on Algorithmic Learning Theory
Software model synthesis using satisfiability solvers
Empirical Software Engineering
The minimum consistent DFA problem is that of finding a DFA with as few states as possible that is consistent with a given sample (a finite collection of words, each labeled as to whether the DFA should accept or reject it). Assuming that P ≠ NP, it is shown that for any constant k, no polynomial-time algorithm can be guaranteed to find a consistent DFA of size opt^k, where opt is the size of a smallest DFA consistent with the sample. This result holds even if the alphabet is of constant size two, and even if the algorithm is allowed to produce an NFA, a regular grammar, or a regular expression that is consistent with the sample. Similar hardness results are described for the problem of finding small consistent linear grammars.
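The hardness result concerns finding a small consistent DFA; checking whether a *given* DFA is consistent with a labeled sample is straightforward. The following is a minimal sketch of that consistency check (the transition-table representation and function names are illustrative, not from the paper):

```python
# Sketch of the consistency check underlying the minimum consistent DFA
# problem: given a DFA and a sample of labeled words, verify the DFA
# accepts exactly the positively labeled words.

def accepts(delta, start, accepting, word):
    """Run the DFA (transition dict delta: (state, symbol) -> state)."""
    state = start
    for symbol in word:
        state = delta[(state, symbol)]
    return state in accepting

def consistent(delta, start, accepting, sample):
    """sample: iterable of (word, label) pairs; label True means accept."""
    return all(accepts(delta, start, accepting, w) == lab for w, lab in sample)

# Example over the two-letter alphabet {0, 1}: a 2-state DFA accepting
# words containing an even number of 1s.
delta = {(0, "0"): 0, (0, "1"): 1, (1, "0"): 1, (1, "1"): 0}
sample = [("", True), ("1", False), ("11", True), ("101", True)]
print(consistent(delta, start=0, accepting={0}, sample=sample))  # True
```

The hard part, per the result above, is searching the space of small DFAs for one that passes this check: even approximating the smallest such DFA within any polynomial factor is intractable unless P = NP.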