Bi-level path following for cross validated solution of kernel quantile regression
Proceedings of the 25th International Conference on Machine Learning (ICML '08)
We show how to follow the path of cross-validated solutions to families of regularized optimization problems, defined by a combination of a parameterized loss function and a regularization term. A primary example is kernel quantile regression, where the parameter of the loss function is the quantile being estimated. Even though the bi-level optimization problem we encounter for every quantile is non-convex, the manner in which the optimal cross-validated solution evolves with the parameter of the loss function allows us to track this solution. We prove this property, construct the resulting algorithm, and demonstrate it on real and artificial data. This algorithm allows us to efficiently solve the whole family of bi-level problems. We show how it can be extended to cover other modeling problems, such as support vector regression, and alternative in-sample model selection approaches.
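To make the bi-level setting concrete, the following is a minimal sketch of the brute-force baseline that the paper's path-following algorithm is designed to avoid: kernel quantile regression (pinball loss plus an RKHS ridge penalty, in representer form f = K a + b) fit independently at each quantile tau, with the regularization parameter lambda selected by K-fold cross-validation over a grid. This is not the paper's algorithm; all names here (rbf_kernel, fit_kqr, cv_error), the subgradient solver, and the data are illustrative assumptions.

import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gaussian kernel matrix between the rows of A (n, d) and B (m, d).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def pinball(r, tau):
    # Quantile (pinball) loss: tau * r for r >= 0, (tau - 1) * r otherwise.
    return np.where(r >= 0, tau * r, (tau - 1.0) * r)

def fit_kqr(K, y, tau, lam, steps=2000, lr=0.01):
    # Subgradient descent on  (1/n) sum_i pinball(y_i - f_i, tau) + (lam/2) a'Ka,
    # with f = K a + b. A crude inner solver, only for illustration.
    n = len(y)
    a, b = np.zeros(n), 0.0
    for _ in range(steps):
        r = y - (K @ a + b)
        g = np.where(r > 0, -tau, 1.0 - tau)   # subgradient of pinball w.r.t. f
        a -= lr * (K @ g / n + lam * (K @ a))
        b -= lr * g.mean()
    return a, b

def cv_error(X, y, tau, lam, folds=5, gamma=1.0):
    # K-fold cross-validated pinball loss for one (tau, lambda) pair.
    idx = np.array_split(np.arange(len(y)), folds)
    errs = []
    for k in range(folds):
        te = idx[k]
        tr = np.concatenate([idx[j] for j in range(folds) if j != k])
        a, b = fit_kqr(rbf_kernel(X[tr], X[tr], gamma), y[tr], tau, lam)
        f_te = rbf_kernel(X[te], X[tr], gamma) @ a + b
        errs.append(pinball(y[te] - f_te, tau).mean())
    return float(np.mean(errs))

# Brute-force outer loop: a fresh grid search over lambda at every quantile.
# The paper instead tracks the optimal cross-validated solution as tau varies.
rng = np.random.default_rng(0)
X = rng.uniform(size=(80, 1))
y = np.sin(4 * X[:, 0]) + 0.3 * rng.normal(size=80)
for tau in (0.1, 0.5, 0.9):
    best = min(np.logspace(-3, 1, 10), key=lambda lam: cv_error(X, y, tau, lam))
    print(f"tau={tau}: CV-selected lambda = {best:.4g}")

Note that each quantile triggers a full cross-validated grid search; the cost of this outer loop, multiplied over a fine grid of quantiles, is what makes following the path of cross-validated solutions in tau attractive.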