We construct a distribution-free Bayes optimal classifier, the Minimum Error Minimax Probability Machine (MEMPM), in a worst-case setting, i.e., under all possible choices of class-conditional densities with a given mean and covariance matrix. Because it assumes no specific distribution for the data, our model is distinguished from traditional Bayes optimal approaches, which require a distributional assumption. MEMPM extends the Minimax Probability Machine (MPM), a recently proposed classifier, and is shown to subsume MPM as a special case. It also contains another special case, the Biased Minimax Probability Machine, which is appropriate for handling biased (imbalanced) classification. One appealing feature of MEMPM is its explicit performance indicator: a lower bound on the worst-case accuracy, which is shown to be tighter than that of MPM. We provide conditions under which the worst-case Bayes optimal classifier converges to the true Bayes optimal classifier. We demonstrate how to apply a more general statistical framework to estimate the model's input parameters robustly, and we show how to extend the model to nonlinear classification via kernelization. Experiments on both synthetic and real-world benchmark data sets validate our propositions and demonstrate the effectiveness of the model.
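To make the "explicit performance indicator" concrete, the sketch below trains a plain linear MPM (the special case MEMPM generalizes) and computes its worst-case accuracy lower bound. It is a minimal illustration, not the paper's implementation: it uses empirical moments with a small ridge term, a general-purpose `scipy` solver in place of the second-order-cone formulation, and hypothetical helper names (`mpm_train`). The bound follows from the Chebyshev-type inequality underlying MPM: with worst-case margin kappa, accuracy is at least alpha = kappa^2 / (1 + kappa^2) for every distribution sharing the given means and covariances.

```python
import numpy as np
from scipy.optimize import minimize

def mpm_train(X1, X2):
    """Linear Minimax Probability Machine (illustrative sketch).

    Solves  min_a  sqrt(a' S1 a) + sqrt(a' S2 a)  s.t.  a'(m1 - m2) = 1,
    where (m_i, S_i) are the empirical mean and covariance of class i.
    Returns (a, b, alpha): the rule sign(a'x - b) and alpha, the lower
    bound on worst-case accuracy over all distributions with these moments.
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Ridge term keeps the covariances positive definite (an assumption
    # of this sketch, not part of the original formulation).
    S1 = np.cov(X1, rowvar=False) + 1e-6 * np.eye(X1.shape[1])
    S2 = np.cov(X2, rowvar=False) + 1e-6 * np.eye(X2.shape[1])

    def obj(a):
        return np.sqrt(a @ S1 @ a) + np.sqrt(a @ S2 @ a)

    d = m1 - m2
    a0 = d / (d @ d)  # feasible start: a0'(m1 - m2) = 1
    cons = [{"type": "eq", "fun": lambda a: a @ d - 1.0}]
    a = minimize(obj, a0, constraints=cons).x

    kappa = 1.0 / obj(a)                   # worst-case margin
    alpha = kappa**2 / (1.0 + kappa**2)    # worst-case accuracy lower bound
    b = a @ m1 - kappa * np.sqrt(a @ S1 @ a)
    return a, b, alpha
```

MEMPM replaces the single shared bound alpha with per-class bounds weighted by the class priors and maximizes the resulting worst-case Bayes accuracy, which is why MPM (equal per-class bounds) falls out as a special case.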