Many classification tasks require estimation of output class probabilities for use as confidence scores or for inference integrated with other models. Probability estimates derived from large margin classifiers such as support vector machines (SVMs) are often unreliable. We extend SVM large margin classification to GiniSVM maximum entropy multi-class probability regression. GiniSVM combines a quadratic (Gini-Simpson) entropy-based agnostic model with a kernel-based similarity model. A form of Huber loss in the GiniSVM primal formulation elucidates a connection to robust estimation, further corroborated by the impulsive noise filtering property of the reverse water-filling procedure used to arrive at normalized classification margins. The GiniSVM normalized classification margins directly provide estimates of class conditional probabilities, approximating kernel logistic regression (KLR) at reduced computational cost. As with other SVMs, GiniSVM produces a sparse kernel expansion and is trained by solving a quadratic program under linear constraints. GiniSVM training is efficiently implemented by sequential minimal optimization or by growth transformation on probability functions. Results on synthetic and benchmark data, including speaker verification and face detection data, show improved classification performance and increased tolerance to imprecision over soft-margin SVM and KLR.
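The abstract describes how normalized classification margins yield class conditional probability estimates via a reverse water-filling procedure that clips low margins to zero. The sketch below illustrates only the clipping-and-normalization idea: raw per-class margins are thresholded at a common "water level" chosen so the result sums to one, so outlier classes far below the level contribute zero probability (the impulsive-noise-filtering behavior mentioned above). The function name and the exact thresholding rule are illustrative assumptions, not the GiniSVM formulation itself.

```python
import numpy as np

def reverse_waterfill(margins):
    """Illustrative sketch: threshold raw class margins at a common
    water level z and renormalize, so classes with margins below z
    get exactly zero probability (sparse, noise-suppressing output).
    NOTE: hypothetical helper; the actual GiniSVM procedure differs
    in detail from this simple simplex-style projection."""
    f = np.asarray(margins, dtype=float)
    u = np.sort(f)[::-1]                 # margins in descending order
    css = np.cumsum(u)
    k = np.arange(1, len(f) + 1)
    # largest k whose k-th margin still sits above the candidate level
    rho = np.max(k[u > (css - 1.0) / k])
    z = (css[rho - 1] - 1.0) / rho       # common water level
    return np.maximum(f - z, 0.0)        # clipped margins, sum to 1
```

For example, margins `[2.0, 1.0, -3.0]` map to `[1.0, 0.0, 0.0]`: the strongly negative class is filtered out entirely rather than receiving a small residual probability, while equal margins map to the uniform distribution.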