The solution of multi-scale support vector regression (MS-SVR) with the quadratic loss function can be obtained by solving a time-consuming quadratic programming (QP) problem followed by a post-processing step. This paper adapts an expectation-maximization (EM) algorithm, based on two 2-level hierarchical-Bayes models that asymptotically implement the ℓ1-norm and the ℓ0-norm regularization terms, to train MS-SVR quickly. Experimental results show that the EM algorithm is faster than the QP algorithm on large data sets, that the ℓ0-norm regularization term promotes a far sparser solution than the ℓ1-norm, and that the good performance of MS-SVR should be attributed to both the multi-scale kernels and the regularization terms.
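To make the training scheme concrete, below is a minimal NumPy sketch of the kind of EM / iteratively-reweighted update that such 2-level hierarchical-Bayes priors lead to under a quadratic loss: the E-step produces per-weight variances from the current weights (|w_i| for the ℓ1-like prior, w_i² for the ℓ0-like prior), and the M-step solves a reweighted ridge problem over a multi-scale kernel dictionary. All names here (ms_svr_em, multiscale_gram, scales, lam) and the Gaussian-kernel dictionary are illustrative assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (NOT the paper's code): EM-style training of a
# multi-scale kernel regressor with a sparsity-inducing hierarchical prior.
# With quadratic loss and w_i ~ N(0, v_i), the M-step is a reweighted ridge
# solve; the E-step re-estimates v_i from the current weights. Taking
# v_i = |w_i| mimics the l1 penalty, v_i = w_i^2 the l0-like penalty.
import numpy as np

def multiscale_gram(X, centers, scales):
    """Stack RBF Gram matrices for several kernel widths side by side."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.hstack([np.exp(-d2 / (2.0 * s ** 2)) for s in scales])

def ms_svr_em(X, y, scales=(0.1, 0.5, 2.0), lam=1e-2, norm="l1",
              n_iter=50, tol=1e-8):
    Phi = multiscale_gram(X, X, scales)          # multi-scale dictionary
    n, m = Phi.shape
    w = np.linalg.lstsq(Phi, y, rcond=None)[0]   # warm start
    for _ in range(n_iter):
        # E-step: expected per-weight variances under the hierarchical prior.
        v = np.abs(w) if norm == "l1" else w ** 2
        keep = v > tol                           # prune near-zero weights
        # M-step: reweighted ridge solve on the surviving columns only.
        P = Phi[:, keep]
        A = P.T @ P + lam * np.diag(1.0 / v[keep])
        w_new = np.zeros(m)
        w_new[keep] = np.linalg.solve(A, P.T @ y)
        if np.max(np.abs(w_new - w)) < 1e-6:     # converged
            return w_new
        w = w_new
    return w

# Toy usage: a 1-D signal with structure at two different scales.
rng = np.random.default_rng(0)
X = np.linspace(0, 1, 200)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.3 * np.sin(20 * np.pi * X[:, 0])
y += 0.05 * rng.standard_normal(200)
w = ms_svr_em(X, y, norm="l0")
print("nonzero weights:", int((np.abs(w) > 1e-6).sum()), "of", w.size)
```

Note how the pruning step shrinks the linear system as weights are driven to zero, so later iterations get cheaper; this is one plausible reading of why an EM scheme of this kind can outpace a full QP solve on large data sets, and why the ℓ0-like variant (v_i = w_i², a heavier-tailed prior) prunes more aggressively than the ℓ1-like one.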