A wide variety of machine learning algorithms, such as the support vector machine (SVM), minimax probability machine (MPM), and Fisher discriminant analysis (FDA), exist for binary classification. The purpose of this letter is to provide a unified classification model that includes these models through a robust optimization approach. This unified model has several benefits. One is that extensions and improvements intended for SVMs become applicable to MPM and FDA, and vice versa. For example, we can obtain nonconvex variants of MPM and FDA by mimicking Perez-Cruz, Weston, Herrmann, and Schölkopf's (2003) extension from the convex ν-SVM to the nonconvex Eν-SVM. Another benefit is that theoretical results concerning these learning methods can be established at once by analyzing the unified model. We give a statistical interpretation of the unified classification model and prove that the model is a good approximation of the worst-case minimization of an expected loss with respect to an uncertain probability distribution. We also propose a nonconvex optimization algorithm that can be applied to nonconvex variants of existing learning methods and show promising numerical results.
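To make the classification setting concrete, the sketch below trains the plain soft-margin SVM, one member of the family the letter unifies, by subgradient descent on the regularized hinge loss. This is only an illustrative baseline, not the letter's unified model or its nonconvex algorithm; the toy data, step size, and regularization constant are all assumptions made for the example.

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=500):
    """Subgradient descent on the regularized hinge loss
    (lam/2)*||w||^2 + mean(max(0, 1 - y*(X@w + b))),
    with labels y in {-1, +1}. Illustrative sketch only."""
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1.0  # points that violate the margin
        # Subgradient of the objective w.r.t. w and b.
        gw = lam * w - (X[active] * y[active, None]).sum(axis=0) / n
        gb = -y[active].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

# Toy problem: two well-separated Gaussian blobs (invented data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, size=(50, 2)),
               rng.normal(2.0, 1.0, size=(50, 2))])
y = np.array([-1] * 50 + [1] * 50)

w, b = train_linear_svm(X, y)
accuracy = (np.sign(X @ w + b) == y).mean()
```

On separable data like this the learned hyperplane should classify nearly all training points correctly; the nonconvex variants discussed in the letter replace this convex objective rather than this optimization loop.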