Validation-based sparse Gaussian processes for ordinal regression
ICONIP'12 Proceedings of the 19th international conference on Neural Information Processing - Volume Part II
Gaussian processes (GPs) are promising Bayesian methods for classification and regression problems. Designing a GP classifier and making predictions with it are, however, computationally demanding, especially when the training set is large. Sparse GP classifiers are known to overcome this limitation. In this letter, we propose and study a validation-based method for sparse GP classifier design. The proposed method uses a negative log predictive (NLP) loss measure, which is easy to compute for GP models. We use this measure both for basis vector selection and for hyperparameter adaptation. Experimental results on several real-world benchmark data sets show better or comparable generalization performance compared with existing methods.
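As a minimal sketch of the validation measure the abstract describes, the NLP loss averages the negative log of the predictive probability assigned to each held-out label. The function name and the toy probabilities below are illustrative assumptions, not the paper's implementation; the predictive probabilities would come from whatever sparse GP model is being scored.

```python
import numpy as np

def nlp_loss(p_pred, y_val, eps=1e-12):
    """Negative log predictive (NLP) loss on a validation set.

    p_pred: predicted probability of class +1 for each validation point
    y_val:  labels in {-1, +1}
    """
    # Probability the model assigned to the label actually observed.
    p_obs = np.where(y_val == 1, p_pred, 1.0 - p_pred)
    # Clip to avoid log(0); lower NLP means better predictive fit.
    return -np.mean(np.log(np.clip(p_obs, eps, 1.0)))

# Toy usage: compare two candidate models on the same held-out labels.
y_val = np.array([1, -1, 1, 1, -1])
p_a = np.array([0.9, 0.2, 0.8, 0.7, 0.1])   # confident and mostly correct
p_b = np.array([0.6, 0.5, 0.5, 0.6, 0.5])   # nearly uninformative
assert nlp_loss(p_a, y_val) < nlp_loss(p_b, y_val)
```

In a validation-based design loop, a score of this form could be evaluated for each candidate basis vector set or hyperparameter setting, keeping whichever choice lowers the held-out NLP.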