This letter describes Bayesian techniques for support vector classification. In particular, we propose a novel differentiable loss function, called the trigonometric loss function, which has the desirable property of natural normalization in the likelihood function, and we then follow standard Gaussian process techniques to set up a Bayesian framework. Within this framework, Bayesian inference is used to implement model adaptation while retaining the merits of the support vector classifier, such as sparseness and convex programming. This differs from standard Gaussian processes for classification. Moreover, we provide class probabilities when making predictions. Experimental results on benchmark data sets indicate the usefulness of this approach.
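The abstract does not spell out the loss itself, so the following is a minimal sketch, assuming the trigonometric likelihood commonly associated with this line of work: P(y | f) = cos^2(pi/4 * (1 - y f)) for -1 < y f < 1, clipped to 0 and 1 outside that interval, so that the two class probabilities sum to one at every latent value f (the "natural normalization" mentioned above), while the corresponding negative log-likelihood is zero beyond the margin y f >= 1. The function names below are illustrative and not taken from the letter.

```python
import numpy as np


def trig_likelihood(y, f):
    """Trigonometric likelihood P(y | f) for labels y in {-1, +1}.

    Sketch of one form used in the Bayesian SVM literature (the exact
    definition in the letter may differ): 1 when y*f >= 1, 0 when
    y*f <= -1, and cos^2(pi/4 * (1 - y*f)) in between.
    """
    z = np.asarray(y * f, dtype=float)
    inside = np.cos(np.pi / 4.0 * (1.0 - z)) ** 2
    return np.where(z >= 1.0, 1.0, np.where(z <= -1.0, 0.0, inside))


def trig_loss(y, f):
    """Negative log-likelihood; infinite where the margin y*f <= -1 is violated."""
    with np.errstate(divide="ignore"):
        return -np.log(trig_likelihood(y, f))


if __name__ == "__main__":
    f = np.linspace(-2.0, 2.0, 9)
    # Natural normalization: class probabilities sum to one for every f.
    print(trig_likelihood(+1, f) + trig_likelihood(-1, f))  # all ones
    # Loss is zero once y*f >= 1, mimicking the flat region of the hinge loss.
    print(trig_loss(+1, f))
```

Under these assumptions, the zero-loss region beyond the margin is what preserves sparseness in the resulting classifier, while the smooth, convex interior keeps the training problem a convex program, as the abstract states.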