Robust transmission of unbounded strings using Fibonacci representations
IEEE Transactions on Information Theory
Model selection based on minimum description length
Journal of Mathematical Psychology
The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions
COLT '02 Proceedings of the 15th Annual Conference on Computational Learning Theory
Adaptive Sparseness for Supervised Learning
IEEE Transactions on Pattern Analysis and Machine Intelligence
Feature Extraction: Foundations and Applications (Studies in Fuzziness and Soft Computing)
Bolasso: model consistent Lasso estimation through the bootstrap
Proceedings of the 25th international conference on Machine learning
Minimum description length induction, Bayesianism, and Kolmogorov complexity
IEEE Transactions on Information Theory
We introduce a linear regression regularization method based on the minimum description length (MDL) principle, which aims at both sparsification and overfitting avoidance. We begin by building compact prefix-free codes for both rational-valued parameters and integer-valued residuals, then construct smooth approximations to their code lengths, so as to obtain an objective function whose minimization yields optimal lossless compression under certain assumptions. We compare the method against the LASSO on the simulated datasets proposed by Tibshirani [14], examining generalization and accuracy in recovering the sparsity structure.
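The two-part scheme described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes an Elias-gamma-style bound for the integer code length, uses 2·log2(|n|+1)+1 as the smooth differentiable surrogate, and introduces a hypothetical `resolution` parameter for quantizing the rational-valued coefficients.

```python
import numpy as np

def smooth_code_length(n):
    """Smooth approximation to the length (in bits) of a universal
    prefix-free code for integer n (an Elias-gamma-style bound).
    The exact code length is a step function of n; replacing it with
    2*log2(|n| + 1) + 1 gives a differentiable surrogate."""
    return 2.0 * np.log2(np.abs(n) + 1.0) + 1.0

def mdl_objective(beta, X, y, resolution=1.0):
    """Two-part MDL description length for a linear model: bits to
    encode the (quantized) parameters plus bits to encode the
    integer-rounded residuals. `resolution` is a hypothetical
    quantization step for the parameters, assumed for this sketch."""
    residuals = np.round(y - X @ beta)            # integer-valued residuals
    param_bits = smooth_code_length(beta / resolution).sum()
    resid_bits = smooth_code_length(residuals).sum()
    return param_bits + resid_bits
```

Minimizing `mdl_objective` over `beta` trades off model complexity (parameter bits) against fit (residual bits): a coefficient quantized to zero costs only one bit, which is what drives sparsification.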