Sparse kernel methods are highly effective for regression and classification problems. Their sparsity and performance depend on selecting an appropriate kernel function, which is typically done by cross-validation. In this paper, we propose an incremental method for supervised learning that is similar to the relevance vector machine (RVM) but also learns the parameters of the kernels during model training. Specifically, we learn a separate parameter value for each kernel, which yields a very flexible model. To avoid over-fitting, we use a sparsity-enforcing prior that controls the effective number of model parameters. We present experimental results on artificial data to demonstrate the advantages of the proposed method, and we compare it with the standard RVM on several commonly used regression and classification data sets.
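To make the RVM baseline concrete, the following is a minimal sketch of standard sparse Bayesian regression with a fixed Gaussian kernel width, using the classic evidence-based (MacKay-style) hyperparameter updates. It illustrates how the per-weight precision prior prunes most basis functions; the paper's method goes further by also adapting per-kernel parameters. The toy data, kernel width, and pruning threshold here are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data: noisy sinc function (a common RVM demo problem)
X = np.linspace(-10, 10, 100)
y = np.sinc(X / np.pi) + 0.05 * rng.standard_normal(X.size)

def gauss_kernel(a, b, width=2.0):
    # Gaussian basis functions with a single, fixed width (assumed value)
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * width ** 2))

Phi = gauss_kernel(X, X)           # design matrix: one basis per training point
alpha = np.ones(Phi.shape[1])      # per-weight precision hyperparameters
beta = 1.0 / 0.05 ** 2             # noise precision (treated as known here)

for _ in range(200):
    # Posterior over weights given the current hyperparameters
    S = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    mu = beta * S @ Phi.T @ y
    # Evidence-based re-estimation: weights with tiny mu get huge alpha
    gamma = 1.0 - alpha * np.diag(S)
    alpha = np.clip(gamma / (mu ** 2 + 1e-12), 1e-12, 1e12)

# Basis functions whose precision stayed finite are the "relevance vectors"
relevant = alpha < 1e6
print(f"{relevant.sum()} relevance vectors out of {X.size}")
```

Running this typically leaves only a small fraction of the 100 basis functions active, which is the sparsity behavior the abstract refers to; the adaptive-kernel extension described in the paper would additionally optimize each kernel's own parameters during this iteration.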