In this paper I describe the general principles of learning as data compression. I introduce two-part code optimization and analyze its theoretical background in terms of Kolmogorov complexity. The good news is that, in theory, the optimal compression represents the optimal interpretation of the data; the bad news is that this optimal compression cannot be computed, and that an increase in compression does not necessarily imply a better theory. I then discuss the application of these insights to DFA induction.
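As a rough illustration of the two-part code idea in the DFA setting, the sketch below scores a candidate automaton by L(H) + L(D|H): the bits needed to write down the DFA plus the bits needed to repair its mistakes on a labelled sample. This is a minimal sketch, not the paper's method; the particular encodings (log-based transition costs, one index per misclassified string) are my own simplifying assumptions.

```python
import math

def dfa_model_bits(n_states: int, alphabet_size: int) -> float:
    """L(H): bits to write down the DFA. Naive encoding (an assumption):
    each of the n_states * alphabet_size transitions names a target state,
    plus one accept/reject bit per state."""
    target_bits = math.log2(n_states) if n_states > 1 else 0.0
    return n_states * alphabet_size * target_bits + n_states

def data_bits_given_dfa(accepts, sample) -> float:
    """L(D|H): bits to correct the DFA's errors on the labelled sample,
    encoded (again, an assumed scheme) as one index per misclassified string."""
    n = len(sample)
    errors = sum(1 for s, label in sample if accepts(s) != label)
    return errors * math.log2(n) if errors and n > 1 else 0.0

def two_part_code(accepts, n_states, alphabet_size, sample) -> float:
    """Total two-part code length L(H) + L(D|H); smaller is better."""
    return dfa_model_bits(n_states, alphabet_size) + data_bits_given_dfa(accepts, sample)

# Toy usage: a 2-state DFA over {a, b} accepting strings with an even number of a's.
def even_as(s: str) -> bool:
    return s.count('a') % 2 == 0

sample = [("", True), ("a", False), ("aa", True),
          ("ab", False), ("ba", False), ("aab", True)]
print(two_part_code(even_as, n_states=2, alphabet_size=2, sample=sample))
```

A search over candidate DFAs would pick the one minimizing this total; the paper's central caution is that lowering this score does not guarantee a better theory, since the truly optimal compression is uncomputable.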