A training algorithm for optimal margin classifiers. COLT '92: Proceedings of the Fifth Annual Workshop on Computational Learning Theory.
Neural networks and the bias/variance dilemma. Neural Computation.
A practical Bayesian framework for backpropagation networks. Neural Computation.
An introduction to variational methods for graphical models. In: Learning in Graphical Models.
An Introduction to Support Vector Machines and Other Kernel-based Learning Methods.
Bayesian Learning for Neural Networks.
Neural Networks for Pattern Recognition.
Pattern Recognition and Neural Networks.
Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond.
Bayesian parameter estimation via variational methods. Statistics and Computing.
A fast dual algorithm for kernel logistic regression. ICML '02: Proceedings of the Nineteenth International Conference on Machine Learning.
A greedy training algorithm for sparse least-squares support vector machines. ICANN '02: Proceedings of the International Conference on Artificial Neural Networks.
A family of algorithms for approximate Bayesian inference.
Everything old is new again: a fresh look at historical approaches in machine learning.
Sparse Bayesian learning and the relevance vector machine. The Journal of Machine Learning Research.
Efficient SVM training using low-rank kernel representations. The Journal of Machine Learning Research.
The evidence framework applied to classification networks. Neural Computation.
Online feature selection algorithm with Bayesian l1 regularization. PAKDD '09: Proceedings of the 13th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining.
In this paper we present a simple hierarchical Bayesian treatment of the sparse kernel logistic regression (KLR) model, based on the evidence framework introduced by MacKay. The principal innovation is a re-parameterisation of the model such that the usual spherical Gaussian prior over the parameters in the kernel-induced feature space also corresponds to a spherical Gaussian prior over the transformed parameters, permitting the straightforward derivation of an efficient update formula for the regularisation parameter. The Bayesian framework also allows good values for the kernel parameters to be selected by maximising the marginal likelihood, or evidence, of the model. Results obtained on a variety of benchmark data sets indicate that the Bayesian KLR model is competitive with KLR models whose hyper-parameters are selected via cross-validation, as well as with the support vector machine and the relevance vector machine.
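The evidence-style update for the regularisation parameter can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the RBF kernel choice, and the use of MacKay's standard re-estimation formula α ← γ / ‖w‖² (with γ the effective number of parameters and ‖w‖² = aᵀKa in the kernel-induced feature space) are assumptions; the sparsity mechanism and kernel-parameter selection described in the paper are omitted.

```python
import numpy as np

def rbf_kernel(X, Z, width=1.0):
    # Squared Euclidean distances between all pairs of rows, then RBF.
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_bayesian_klr(X, y, width=1.0, n_outer=8, n_newton=15, jitter=1e-6):
    """Kernel logistic regression with a MacKay-style evidence update
    for the regularisation parameter alpha (illustrative sketch).
    Labels y must be in {0, 1}."""
    n = X.shape[0]
    K = rbf_kernel(X, X, width)
    a = np.zeros(n)            # dual parameters (w = Phi' a in feature space)
    alpha = 1.0                # prior precision / regularisation parameter
    for _ in range(n_outer):
        # Newton (IRLS) iterations for the MAP estimate of a given alpha.
        for _ in range(n_newton):
            p = 1.0 / (1.0 + np.exp(-np.clip(K @ a, -30.0, 30.0)))
            W = p * (1.0 - p)                        # IRLS weights
            g = K @ (p - y) + alpha * (K @ a)        # grad of neg. log posterior
            H = K @ (W[:, None] * K) + alpha * K     # Hessian
            a -= np.linalg.solve(H + jitter * np.eye(n), g)
        # Evidence re-estimation: gamma = effective number of parameters.
        p = 1.0 / (1.0 + np.exp(-np.clip(K @ a, -30.0, 30.0)))
        s = np.sqrt(p * (1.0 - p))
        # Spectrum of the data Hessian in feature space equals the
        # (non-negative) spectrum of sqrt(W) K sqrt(W).
        lam = np.clip(np.linalg.eigvalsh(s[:, None] * K * s[None, :]), 0.0, None)
        gamma = np.sum(lam / (lam + alpha))
        alpha = max(gamma / max(a @ K @ a, 1e-12), 1e-6)  # ||w||^2 = a'Ka
    return a, alpha

def predict_klr(a, Xtrain, Xtest, width=1.0):
    # Posterior class-1 probability at the MAP parameters.
    f = rbf_kernel(Xtest, Xtrain, width) @ a
    return 1.0 / (1.0 + np.exp(-np.clip(f, -30.0, 30.0)))
```

Because the re-parameterised prior remains spherical, no hand-tuned regularisation constant is needed; alpha is re-estimated from the data after each MAP fit, alternating the inner Newton loop with the fixed-point update.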