To derive classifiers that are robust to outlier observations, we present integer programming formulations of Vapnik's support vector machine (SVM) with the ramp loss and the hard margin loss. The ramp loss caps the error contributed by each training observation at 2, while the hard margin loss measures error by counting the training observations that fall within the margin or are misclassified outside of it. SVM with these loss functions is shown to be a consistent estimator when used with certain kernel functions. In computational studies with simulated and real-world data, SVM with the robust loss functions ignores outlier observations effectively, providing an advantage over SVM with the traditional hinge loss when using the linear kernel. Although training SVM with the robust loss functions requires solving a quadratic mixed-integer program (QMIP) and is NP-hard, whereas traditional SVM requires only the solution of a continuous quadratic program (QP), we find good solutions and prove optimality for instances with up to 500 observations. Solution methods are presented for the new formulations that improve computational performance over industry-standard integer programming solvers alone.
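The paper itself provides no code; the following NumPy sketch (function names hinge_loss, ramp_loss, and hard_margin_loss are our own labels, not the paper's) is a minimal illustration of how the three losses treat the signed margin y_i(w·x_i + b), assuming the standard definitions the abstract describes.

```python
import numpy as np

def hinge_loss(margins):
    # Traditional SVM hinge loss: max(0, 1 - margin). The penalty is
    # unbounded, so one extreme outlier can dominate the objective.
    return np.maximum(0.0, 1.0 - margins)

def ramp_loss(margins):
    # Ramp loss: the hinge loss truncated at 2, so each training
    # observation contributes at most 2 no matter how badly it is
    # misclassified.
    return np.minimum(2.0, np.maximum(0.0, 1.0 - margins))

def hard_margin_loss(margins):
    # Hard margin loss: a 0/1 count of observations that lie inside
    # the margin (margin < 1) or are misclassified (margin < 0).
    return (margins < 1.0).astype(float)

# Signed margins y_i * (w . x_i + b) for four hypothetical points:
# well classified, inside the margin, misclassified, extreme outlier.
margins = np.array([2.0, 0.5, -0.5, -10.0])
print(hinge_loss(margins))        # [ 0.   0.5  1.5 11. ]
print(ramp_loss(margins))         # [0.  0.5 1.5 2. ]
print(hard_margin_loss(margins))  # [0. 1. 1. 1.]
```

The printed values show the robustness mechanism the abstract claims: the outlier with margin -10 adds 11 to the hinge objective but at most 2 (ramp) or 1 (hard margin) to the robust objectives, so it cannot pull the separating hyperplane toward itself. The price is that the bounded losses are nonconvex, which is why training becomes a QMIP rather than a continuous QP.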