Learning regular languages from positive examples alone is infeasible in general. Certain subclasses of the regular languages, however, can be inferred from positive examples only. The most common approach to learning such subclasses is the specific-to-general technique of merging either states of an initial finite-state automaton or nonterminals of a regular grammar until convergence.

In this paper we seek to unify several language learning approaches under a general-to-specific learning scheme. In automata terms, the scheme refines a partition of the states of the automaton, starting from a single block, until the desired decomposition is obtained; that is, until every block of the partition is uniform with respect to the predicate that captures the properties required of the language.

We develop a series of learning algorithms for well-known classes of regular languages as instantiations of the same master algorithm. Through block decomposition we can describe within one scheme, for example, both the learning-by-rote approach of minimizing the number of states of the automaton and the inference of k-reversible languages.

Under worst-case analysis, partition refinement is less efficient than the alternative approaches, but in many cases it turns out to be more efficient in practice. Moreover, it guarantees inference of the canonical automaton, whereas the state-merging approach leaves excess states in the final automaton unless a separate minimization step is applied.
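To make the scheme concrete, below is a minimal Python sketch of the general-to-specific master algorithm, assuming hypothetical helper names (`build_pta`, `refine`); it is an illustration under these assumptions, not the authors' implementation. The uniformity predicate used here, agreement on acceptance and on successor blocks, instantiates the learning-by-rote (state-minimization) case on a prefix-tree acceptor; swapping in a different predicate would instantiate other classes, such as the k-reversible languages.

```python
# Sketch: general-to-specific learning by partition refinement.
# Start from one block (the most general hypothesis) and split blocks
# until every block is uniform; the final blocks are the states of the
# hypothesis automaton.

def build_pta(samples):
    """Prefix-tree acceptor over a set of positive example strings.
    States are the prefixes of the samples; transitions are partial."""
    states, delta, accepting = {""}, {}, set()
    for w in samples:
        for i, a in enumerate(w):
            states.add(w[:i + 1])
            delta[(w[:i], a)] = w[:i + 1]
        accepting.add(w)
    return states, delta, accepting


def refine(states, delta, accepting, alphabet):
    """Refine the one-block partition until every block is uniform."""
    partition = [frozenset(states)]          # most general hypothesis

    def block_of(q):
        return next(i for i, b in enumerate(partition) if q in b)

    changed = True
    while changed:
        changed = False
        for b in list(partition):
            # Signature of a state: its acceptance bit plus, for each
            # input symbol, the block its (partial) transition leads to.
            def sig(q):
                return (q in accepting,
                        tuple(block_of(delta[(q, a)]) if (q, a) in delta
                              else None
                              for a in sorted(alphabet)))

            groups = {}
            for q in b:
                groups.setdefault(sig(q), set()).add(q)
            if len(groups) > 1:              # block not uniform: split it
                partition.remove(b)
                partition.extend(frozenset(g) for g in groups.values())
                changed = True
                break                        # signatures are stale now
    return partition


# Usage: infer a hypothesis automaton from positive examples.
states, delta, accepting = build_pta({"ab", "abab", "ababab"})
print(refine(states, delta, accepting, alphabet={"a", "b"}))
```

Because refinement only ever splits blocks, the final partition induces the canonical (minimal) automaton directly, which mirrors the abstract's point that no separate minimization step is needed, in contrast to state merging.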