We present an algorithm that finds the one-dimensional subspace in which the Bayes error is minimized for the C-class problem with homoscedastic Gaussian distributions. Our main result shows that the set of one-dimensional subspaces v for which the order of the projected class means is identical defines a convex region with an associated convex Bayes error function g(v). This allows the error function to be minimized with standard convex optimization algorithms. We then extend the algorithm to minimize the Bayes error in the more general case of heteroscedastic distributions by means of an appropriate kernel mapping function. This result is further extended to obtain the d-dimensional solution for any given d by iteratively applying the algorithm to the null space of the (d-1)-dimensional solution. We also show how this result can be used to improve upon the outcomes of existing algorithms, and derive a low-computational-cost linear approximation. Extensive experimental validation demonstrates the use of these algorithms in classification, data analysis and visualization.
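To make the objective concrete, the sketch below computes the Bayes error of C equal-prior homoscedastic Gaussians projected onto a unit direction v: with ordered projected means and shared standard deviation sigma, each adjacent pair of classes contributes 2*Phi(-gap/(2*sigma)) of misclassified mass. The `best_direction` search is a hypothetical brute-force stand-in for the paper's convex optimization (it simply scans directions in the plane); the function names and the toy means are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from math import erf, sqrt, cos, sin, pi

def gaussian_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def projected_bayes_error(v, means, sigma):
    """Bayes error of C equal-prior homoscedastic Gaussians projected
    onto the unit direction v (shared 1-D standard deviation sigma)."""
    m = np.sort(means @ v)            # ordered projected class means
    gaps = np.diff(m)                 # distances between adjacent means
    C = len(m)
    # Decision boundaries sit at the midpoints between adjacent means;
    # each adjacent pair misclassifies 2*Phi(-gap/(2*sigma)) of its mass.
    return (2.0 / C) * sum(gaussian_cdf(-g / (2.0 * sigma)) for g in gaps)

def best_direction(means, sigma, n_angles=3600):
    # Brute-force stand-in for the convex search: scan unit directions
    # v(theta) over a half circle (v and -v give the same error).
    best_v, best_err = None, np.inf
    for k in range(n_angles):
        theta = pi * k / n_angles
        v = np.array([cos(theta), sin(theta)])
        err = projected_bayes_error(v, means, sigma)
        if err < best_err:
            best_v, best_err = v, err
    return best_v, best_err
```

For class means spread along the x-axis, the search recovers a direction close to the x-axis, and the resulting error is far lower than that of a projection onto the y-axis, where the projected means nearly coincide.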