Communications of the ACM.
Information Processing Letters.
Quantifying inductive bias: AI learning algorithms and Valiant's learning framework. Artificial Intelligence.
A general lower bound on the number of examples needed for learning. Information and Computation.
Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM (JACM).
The Strength of Weak Learnability. Machine Learning.
On the necessity of Occam algorithms. STOC '90: Proceedings of the Twenty-Second Annual ACM Symposium on Theory of Computing.
Computational learning theory: an introduction.
Linear approximation of shortest superstrings. Journal of the ACM (JACM).
Predicting {0, 1}-functions on randomly drawn points. Information and Computation.
Journal of Computer and System Sciences.
An introduction to Kolmogorov complexity and its applications (2nd ed.).
Towards Representation Independence in PAC Learning. AII '89: Proceedings of the International Workshop on Analogical and Inductive Inference.
Theoretical Computer Science.
Estimating relatedness via data compression. ICML '06: Proceedings of the 23rd International Conference on Machine Learning.
Biological information as set-based complexity. IEEE Transactions on Information Theory (special issue on information theory in molecular biology and neuroscience).
The classification game: complexity regularization through interaction. COIN'09: Proceedings of the 5th International Conference on Coordination, Organizations, Institutions, and Norms in Agent Systems.
We provide a new representation-independent formulation of Occam's razor theorem, based on Kolmogorov complexity. This new formulation allows us to: (i) obtain better sample complexity than both the length-based [Blumer et al., Inform. Process. Lett. 24 (1987) 377-380] and the VC-based [Blumer et al., J. ACM 35 (1989) 929-965] versions of Occam's razor theorem in many applications; and (ii) achieve a sharper reverse of Occam's razor theorem than that of Board and Pitt [STOC '90, pp. 54-63]. Specifically, we weaken the assumptions made by Board and Pitt and extend the reverse direction to superpolynomial running times.
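For background, the length-based version of Occam's razor theorem cited above (Blumer et al., Inform. Process. Lett. 24 (1987) 377-380) is usually stated along the following lines; this is a standard textbook formulation added here for context, not a claim from the abstract itself.

```latex
% An (a,b)-Occam algorithm (a >= 1, 0 <= b < 1) for a concept class C
% is a polynomial-time algorithm that, given any m examples of a target
% concept of size n, outputs a consistent hypothesis of bit length at
% most n^a m^b. The theorem then states that such an algorithm PAC-learns
% C with sample complexity
m \;=\; O\!\left( \frac{1}{\epsilon}\,\ln\frac{1}{\delta}
      \;+\; \left( \frac{n^{a}}{\epsilon} \right)^{\frac{1}{1-b}} \right).
```

The Kolmogorov-complexity formulation described in the abstract presumably replaces the explicit syntactic length bound $n^a m^b$ with a bound on the Kolmogorov complexity of the hypothesis, which is independent of any particular hypothesis representation.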