Enforcing sparsity constraints has proven an effective and efficient way to obtain state-of-the-art results in regression and classification tasks. Unlike the support vector machine (SVM), the relevance vector machine (RVM) explicitly encodes the criterion of model sparsity as a prior over the model weights. However, the lack of an explicit prior structure over the weight variances means that the degree of sparsity is largely controlled by the choice of kernel (and kernel parameters). This can lead to severe overfitting or oversmoothing, possibly even both at once (e.g., for the multiscale Doppler data). We detail an efficient scheme to control sparsity in Bayesian regression by incorporating a flexible noise-dependent smoothness prior into the RVM. We present an empirical evaluation of the effect of the choice of prior structure on a selection of popular data sets, and elucidate the link between Bayesian wavelet shrinkage and RVM regression. Our model encompasses the original RVM as a special case, yet our empirical results show that it can surpass the RVM in goodness of fit and achieved sparsity, as well as in computational performance in many cases. The code is freely available.