Generalized mean for feature extraction in one-class classification problems
Pattern Recognition
In many one-class classification problems, such as face detection and object verification, conventional linear discriminant analysis often fails because it inappropriately assumes that the negative samples follow a Gaussian distribution. In addition, it sometimes cannot extract a sufficient number of features because it relies only on the mean value of each class. To resolve these problems, in this paper we extend biased discriminant analysis (BDA), which was originally developed for one-class classification problems. BDA makes no assumption about the distribution of negative samples and tries to separate each negative sample as far from the center of the positive samples as possible. The first extension uses a saturation technique to suppress the influence of samples located far from the decision boundary. The second utilizes the L1 norm instead of the L2 norm. We also present a method that extends BDA and its variants to multi-class classification problems. Our approach is useful in that, without adding much complexity, it reduces the negative effect of negative samples lying far from the center of the positive samples, resulting in better classification performance. We have applied the proposed methods to several classification problems and compared their performance with that of conventional methods.
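Since the abstract only sketches the BDA baseline that the paper extends, the following is a minimal illustrative sketch assuming the standard scatter-ratio formulation of BDA: maximize the scatter of negative samples about the positive-class mean relative to the scatter of positive samples about that same mean. The function name bda_directions and the ridge regularizer reg are our own choices, not from the paper; the paper's saturation and L1-norm variants modify how each negative sample's deviation from the positive center enters this criterion.

```python
import numpy as np
from scipy.linalg import eigh

def bda_directions(X_pos, X_neg, n_components=2, reg=1e-6):
    """Sketch of conventional biased discriminant analysis (BDA).

    Finds projection directions that push negative samples away from
    the positive-class center while keeping positive samples compact.
    """
    m_pos = X_pos.mean(axis=0)

    # Scatter of negatives around the positive centroid: no Gaussian
    # assumption on the negative class; each negative sample is treated
    # individually relative to the positive center.
    D_neg = X_neg - m_pos
    S_neg = D_neg.T @ D_neg

    # Within-class scatter of the positives, regularized for stability.
    D_pos = X_pos - m_pos
    S_pos = D_pos.T @ D_pos + reg * np.eye(X_pos.shape[1])

    # Generalized symmetric eigenproblem S_neg w = lambda * S_pos w;
    # keep the eigenvectors with the largest eigenvalues.
    vals, vecs = eigh(S_neg, S_pos)
    order = np.argsort(vals)[::-1]
    return vecs[:, order[:n_components]]
```

Given a learned W = bda_directions(X_pos, X_neg), data are projected as Z = X @ W before classification. Because the L2-norm scatter in S_neg grows quadratically with a negative sample's distance from the positive center, remote negatives can dominate this criterion; that sensitivity is what the paper's saturation and L1-norm extensions are designed to suppress.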