A frequent problem in support vector regression is selecting appropriate features or parameters. We present an efficient feature selection method for regression problems in which optimal kernel weights and model parameters are learned alternately. Our approach generalizes ν-support vector regression and can be formulated as a quadratically constrained quadratic program (QCQP), which can be solved efficiently by the level method. Moreover, we introduce an elastic-net-type constraint on the kernel weights that finds the best trade-off between sparsity and accuracy. Our algorithm keeps useful information and discards redundant information, while retaining properties analogous to those of the ν parameter. Experimental evaluation of the proposed algorithm on synthetic datasets and a stock market price forecasting task shows that our method selects suitable features for model building and attains competitive performance.
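The alternating scheme described in the abstract can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: it uses scikit-learn's NuSVR on a precomputed combined kernel, and it replaces the QCQP/level-method weight update with a hypothetical projected-gradient step onto an elastic-net-type set. The step size, the mixing parameter lam, and the projection routine project_elastic_net are assumptions made for illustration only.

    # Sketch of alternating multiple-kernel nu-SVR (illustrative, not the
    # authors' solver): fix kernel weights d and fit nu-SVR on the combined
    # kernel, then fix the SVR dual solution and update d under an
    # elastic-net-type constraint.
    import numpy as np
    from sklearn.svm import NuSVR

    def combine_kernels(kernels, d):
        # K(d) = sum_m d_m * K_m, a convex combination of base Gram matrices
        return sum(w * K for w, K in zip(d, kernels))

    def project_elastic_net(d, lam=0.5):
        # Crude projection onto {d >= 0, lam*||d||_1 + (1-lam)*||d||_2^2 <= 1};
        # a hypothetical stand-in for the paper's QCQP/level-method step.
        d = np.maximum(d, 0.0)
        def level(v):
            return lam * v.sum() + (1 - lam) * (v ** 2).sum()
        if level(d) > 1.0:
            lo, hi = 0.0, 1.0
            # shrink d by a scalar found via bisection until feasible
            for _ in range(50):
                mid = (lo + hi) / 2
                if level(mid * d) > 1.0:
                    hi = mid
                else:
                    lo = mid
            d = lo * d
        return d

    def mkl_nu_svr(kernels, y, nu=0.5, n_iter=20, lam=0.5, step=0.1):
        M = len(kernels)
        d = np.full(M, 1.0 / M)          # start from uniform kernel weights
        for _ in range(n_iter):
            K = combine_kernels(kernels, d)
            svr = NuSVR(kernel="precomputed", nu=nu).fit(K, y)
            # Recover the dual coefficients (alpha - alpha*) on all samples
            alpha = np.zeros(len(y))
            alpha[svr.support_] = svr.dual_coef_.ravel()
            # Gradient of the dual objective w.r.t. each weight d_m is
            # -(1/2) * alpha^T K_m alpha at the current dual solution
            grad = np.array([-0.5 * alpha @ Km @ alpha for Km in kernels])
            d = project_elastic_net(d - step * grad, lam)
        return d, svr

In this reading, each K_m would be a Gram matrix built from one candidate feature or kernel parameter, so a weight d_m driven to zero by the sparse part of the constraint corresponds to discarding that feature, which is how the method performs feature selection.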