Recently, sparse kernel methods such as the Relevance Vector Machine (RVM) have become very popular for solving regression problems. The sparsity and performance of these methods depend on selecting an appropriate kernel function, which is typically achieved using a cross-validation procedure. In this paper we propose a modification to the incremental RVM learning method that also learns the location and scale parameters of the Gaussian kernels during model training. More specifically, in order to effectively model signals with different characteristics at various locations, we learn different parameter values for each kernel, resulting in a very flexible model. To avoid overfitting, we use a sparsity-enforcing prior that controls the effective number of parameters of the model. Finally, we apply the proposed method to one-dimensional and two-dimensional artificial signals, and evaluate its performance on two real-world datasets.
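To make the setting concrete, the following is a minimal sketch of sparse Bayesian (RVM-style) regression with Gaussian basis functions that each carry their own centre and width. It is not the paper's incremental algorithm: the centres and widths are fixed here (the paper learns them during training), and the toy signal, kernel count, and noise level are illustrative assumptions. Only the standard ARD re-estimation equations for the weight precisions are shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# 1-D toy signal: a smooth component plus a sharp local bump,
# i.e. different characteristics at different locations.
x = np.linspace(0.0, 1.0, 200)
t = (np.sin(2 * np.pi * x)
     + np.exp(-((x - 0.7) ** 2) / (2 * 0.02 ** 2))
     + 0.05 * rng.standard_normal(x.size))

# Gaussian basis functions, each with its own centre m_i and width s_i.
# These are fixed illustrative values; the proposed method would adapt
# them per kernel during training.
centres = np.linspace(0.0, 1.0, 20)
widths = np.full_like(centres, 0.05)
Phi = np.exp(-((x[:, None] - centres[None, :]) ** 2)
             / (2 * widths[None, :] ** 2))

# Sparsity-enforcing ARD prior: one precision alpha_i per weight.
# Iterate the standard RVM re-estimation equations with the noise
# precision beta held fixed for simplicity.
alpha = np.ones(centres.size)
beta = 1.0 / 0.05 ** 2
for _ in range(100):
    Sigma = np.linalg.inv(np.diag(alpha) + beta * Phi.T @ Phi)
    mu = beta * Sigma @ Phi.T @ t
    gamma = 1.0 - alpha * np.diag(Sigma)          # "well-determinedness"
    alpha = np.clip(gamma / (mu ** 2 + 1e-12), 1e-6, 1e12)

# Kernels whose alpha diverges are pruned; the rest are "relevant".
relevant = int(np.sum(alpha < 1e6))
print("relevant kernels:", relevant, "of", centres.size)
```

Because every kernel here shares the same width, the sharp bump and the smooth sinusoid compete for the same scale; learning a separate width per kernel, as the paper proposes, removes exactly that limitation.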