Pareto optimal linear classification
ICML '06 Proceedings of the 23rd international conference on Machine learning
We consider the problem of choosing a linear classifier that minimizes the misclassification probabilities in two-class classification. This is a bi-criterion problem, involving a trade-off between the two error objectives. We assume that the class-conditional distributions are Gaussian. This assumption makes it computationally tractable to find Pareto optimal linear classifiers, i.e., linear classifiers whose pair of misclassification probabilities cannot be improved upon by any other linear classifier. The main purpose of this paper is to establish several robustness properties of these classifiers with respect to variations and uncertainties in the distributions. We also extend the results to kernel-based classification. Finally, we show how to carry out the trade-off analysis empirically from a finite set of labeled data.
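To illustrate the bi-criterion setup, the sketch below uses the Gaussian class-conditional assumption to evaluate both misclassification probabilities of a linear classifier in closed form, and traces an approximate trade-off curve by minimizing a weighted sum of the two errors. The Gaussian parameters are hypothetical, and the generic numerical optimizer stands in for the paper's actual method, so this is only a minimal sketch of the trade-off idea, not the authors' algorithm.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

# Hypothetical 2-D Gaussian class-conditional parameters (illustrative only).
mu_pos, Sigma_pos = np.array([1.0, 1.0]), np.array([[1.0, 0.3], [0.3, 1.0]])
mu_neg, Sigma_neg = np.array([-1.0, -0.5]), np.array([[1.5, -0.2], [-0.2, 0.8]])

def error_probs(w, b):
    """Exact misclassification probabilities of the rule sign(w.x + b)
    under the Gaussian class-conditional assumption."""
    s_pos = np.sqrt(w @ Sigma_pos @ w)
    s_neg = np.sqrt(w @ Sigma_neg @ w)
    fn = norm.cdf(-(w @ mu_pos + b) / s_pos)  # P(error | class +)
    fp = norm.cdf((w @ mu_neg + b) / s_neg)   # P(error | class -)
    return fn, fp

def scalarized_classifier(lam):
    """Approximate one trade-off point by minimizing the weighted sum
    lam * P(err|+) + (1 - lam) * P(err|-) over (w, b).  A generic
    derivative-free optimizer is used here purely as a sketch."""
    def obj(z):
        w, b = z[:2], z[2]
        if np.allclose(w, 0.0):
            return 1.0  # degenerate classifier; worst-case penalty
        fn, fp = error_probs(w, b)
        return lam * fn + (1 - lam) * fp
    z0 = np.concatenate([mu_pos - mu_neg, [0.0]])  # Fisher-like start
    res = minimize(obj, z0, method="Nelder-Mead")
    return error_probs(res.x[:2], res.x[2])

# Sweep the scalarization weight to trace (P(err|+), P(err|-)) pairs.
curve = [scalarized_classifier(lam) for lam in (0.2, 0.5, 0.8)]
```

Increasing the weight `lam` penalizes errors on the positive class more heavily, so the resulting classifiers trade a lower false-negative probability for a higher false-positive probability, which is exactly the bi-criterion trade-off the abstract describes.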