In this paper we discuss the problem of fitting l1-regularized prediction models in infinite (possibly non-countable) dimensional feature spaces. Our main contributions are:
a. deriving a generalization of l1 regularization, based on measures, which can be applied in non-countable feature spaces;
b. proving that the sparsity property of l1 regularization is maintained in infinite dimensions;
c. devising a path-following algorithm that can generate the set of regularized solutions in "nice" feature spaces; and
d. presenting an example of penalized spline models where this path-following algorithm is computationally feasible and gives encouraging empirical results.
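To make the path-following idea in (c) and the spline example in (d) concrete, here is a minimal finite-dimensional sketch, not the paper's measure-based algorithm: the (uncountable) continuum of truncated-power spline basis functions is approximated by a fine grid of knots, and the full set of l1 (lasso) regularized solutions is then traced with the LARS homotopy path. The knot grid, the synthetic data, and the use of scikit-learn's lars_path as the path-following routine are all illustrative assumptions.

    # Finite-dimensional sketch: trace the l1 regularization path over a
    # dense grid of spline basis functions (an approximation of the
    # infinite feature space; the paper's actual algorithm works with
    # measures over the continuum of knots).
    import numpy as np
    from sklearn.linear_model import lars_path

    rng = np.random.default_rng(0)

    # Synthetic 1-D regression problem (illustrative assumption).
    n = 200
    x = rng.uniform(0.0, 1.0, size=n)
    y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(n)

    # Discretize the continuum of candidate features: one truncated-power
    # cubic basis function (x - t)_+^3 per knot t on a fine grid.
    knots = np.linspace(0.0, 1.0, 201)
    X = np.maximum(x[:, None] - knots[None, :], 0.0) ** 3

    # LARS with the lasso modification follows the piecewise-linear
    # coefficient path, i.e. it generates the whole set of
    # l1-regularized solutions in one sweep.
    alphas, active, coefs = lars_path(X, y, method="lasso")

    # Sparsity in action: even with 201 candidate basis functions,
    # solutions along the path activate only a handful of knots.
    n_nonzero = (np.abs(coefs) > 1e-12).sum(axis=0)
    for a, k in list(zip(alphas, n_nonzero))[:10]:
        print(f"alpha={a:.4f}  active knots={k}")

Refining the knot grid mimics moving toward the non-countable feature space; the printed counts illustrate point (b), that solutions along the path stay sparse even as the number of candidate features grows.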