In view of its ongoing importance for a variety of practical applications, feature selection via ℓ1-regularization methods like the lasso has been subject to extensive theoretical as well as empirical investigation. Despite its popularity, mere ℓ1-regularization has been criticized as inadequate or ineffective, notably in situations in which additional structural knowledge about the predictors should be taken into account. This has stimulated the development of either systematically different regularization methods or double regularization approaches that combine ℓ1-regularization with a second kind of regularization designed to capture additional problem-specific structure. One instance thereof is the 'structured elastic net', a generalization of the proposal in Zou and Hastie (J. R. Stat. Soc. Ser. B 67:301-320, 2005), studied in Slawski et al. (Ann. Appl. Stat. 4(2):1056-1080, 2010) for the class of generalized linear models.

In this paper, we elaborate on the structured elastic net regularizer in conjunction with two important loss functions: the check loss of quantile regression and the hinge loss of support vector classification. Solution path algorithms are developed that compute the whole range of solutions as one regularization parameter varies while the second one is kept fixed.

The methodology and practical performance of our approach are illustrated by means of case studies from image classification and climate science.
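To make the ingredients concrete, the following minimal Python sketch writes down the two loss functions named above and a structured elastic net penalty of the assumed form lam1*||beta||_1 + lam2*beta'*Lambda*beta, with Lambda a positive semidefinite structure matrix (for example a graph Laplacian). The function names, parameter names, and the exact penalty parametrization are illustrative assumptions for exposition, not the authors' implementation.

import numpy as np

def check_loss(residuals, tau):
    # Quantile regression check loss: rho_tau(r) = r * (tau - 1{r < 0}).
    # Illustrative helper, not taken from the paper's code.
    residuals = np.asarray(residuals, dtype=float)
    return residuals * (tau - (residuals < 0))

def hinge_loss(margins):
    # Support vector classification hinge loss: max(0, 1 - y * f(x)),
    # evaluated here on precomputed margins y * f(x).
    return np.maximum(0.0, 1.0 - np.asarray(margins, dtype=float))

def structured_elastic_net_penalty(beta, Lambda, lam1, lam2):
    # Assumed penalty form: lam1 * ||beta||_1 + lam2 * beta' Lambda beta,
    # where Lambda is a positive semidefinite structure matrix;
    # Lambda = identity recovers the ordinary elastic net penalty.
    beta = np.asarray(beta, dtype=float)
    return lam1 * np.sum(np.abs(beta)) + lam2 * float(beta @ Lambda @ beta)

In a path-following setting one would typically keep lam2 (the structure penalty) fixed and trace the fitted coefficients as lam1 varies, or vice versa, which corresponds to the one-parameter-at-a-time scheme described in the abstract.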