We consider regularized support vector machines (SVMs) and show that they are precisely equivalent to a new robust optimization formulation. This equivalence of robust optimization and regularization has implications for both algorithms and analysis. On the algorithmic side, it suggests more general SVM-like classification algorithms that explicitly build in protection against noise while simultaneously controlling overfitting. On the analysis side, the equivalence of robustness and regularization provides a robust optimization interpretation for the success of regularized SVMs. We use this robustness interpretation of SVMs to give a new proof of consistency of (kernelized) SVMs, thus establishing robustness as the reason regularized SVMs generalize well.
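The core of the robustness–regularization equivalence can be seen per sample: the worst-case hinge loss under an adversarial perturbation of the input inside an l2 ball of radius c equals the nominal hinge loss with the margin reduced by c times the (dual) norm of the weight vector — which is exactly where the regularization term comes from. The sketch below illustrates this numerically; the specific values of `w`, `x`, `y`, and `c` are illustrative choices, not notation from the paper.

```python
import numpy as np

# Illustrative sketch (assumed toy values, not the paper's notation):
# verify that sup_{||d||_2 <= c} hinge(y * w @ (x - d))
#           = hinge(y * w @ x - c * ||w||_2)
# i.e. worst-case loss under input perturbation = margin shifted by c*||w||.

rng = np.random.default_rng(0)
w = np.array([1.0, -2.0])   # fixed classifier weights
x = np.array([0.5, 0.3])    # a single input point
y = 1.0                     # its label
c = 0.2                     # perturbation budget (l2 radius)

def hinge(margin):
    return max(0.0, 1.0 - margin)

# Closed form: the adversary's best move shifts the margin by c * ||w||_2.
analytic = hinge(y * (w @ x) - c * np.linalg.norm(w))

# Monte-Carlo check: sample perturbations on the boundary of the l2 ball
# and track the worst (largest) resulting hinge loss.
worst = 0.0
for _ in range(20000):
    d = rng.standard_normal(2)
    d = c * d / np.linalg.norm(d)   # project onto the sphere of radius c
    worst = max(worst, hinge(y * (w @ (x - d))))

print(analytic, worst)  # sampled worst case approaches the closed form
```

Summing this per-sample worst case over the training set yields the regularized hinge objective, which is the equivalence the abstract refers to (here shown only for the l2 ball; the paper treats more general uncertainty sets).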