The d-Dimensional Normal Distribution Case
AI '01 Proceedings of the 14th Australian Joint Conference on Artificial Intelligence: Advances in Artificial Intelligence
When dealing with normally distributed classes, it is well known that the optimal discriminant function for two classes is linear when the covariance matrices are equal. In this paper, we determine the conditions under which the optimal classifier is linear when the covariance matrices are unequal. In all the cases discussed here, the classifier is given by a pair of straight lines, which is a particular (degenerate) case of the general second-degree equation. One of these cases occurs when the two overlapping classes have equal means, which is a generalization of Minsky's paradox for the perceptron. Our results, which to our knowledge are the first of their kind for pairwise linear classifiers, yield a general linear classifier for this case that can be obtained directly from the parameters of the distributions. Numerous other analytic results for two- and d-dimensional normal vectors have also been derived. Finally, we provide empirical results for all of these cases and demonstrate that the resulting linear classifiers achieve very good performance.
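To make the equal-means case concrete, the following is a minimal numerical sketch (Python with NumPy) of the setting described in the abstract, not the paper's own derivation. It builds two 2-D normal classes with identical means and "swapped" diagonal covariance matrices, an instance of Minsky's paradox, and classifies samples with the Bayes discriminant, which for these parameters degenerates into a pair of straight lines obtainable directly from the distribution parameters. The specific covariance values and the helper name pair_of_lines_classifier are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two overlapping 2-D normal classes with EQUAL means and "swapped" diagonal
# covariance matrices -- the equal-means setting in which no single hyperplane
# separates the classes (Minsky's paradox), but a pair of straight lines does.
mu = np.array([0.0, 0.0])
cov1 = np.diag([9.0, 1.0])   # class 1 is elongated along x1
cov2 = np.diag([1.0, 9.0])   # class 2 is elongated along x2

n = 5000
samples1 = rng.multivariate_normal(mu, cov1, n)
samples2 = rng.multivariate_normal(mu, cov2, n)

def pair_of_lines_classifier(x, mu, cov1, cov2):
    """Bayes (quadratic) discriminant for equal means and equal priors:
        g(x) = (x - mu)^T (cov2^{-1} - cov1^{-1}) (x - mu) - ln(|cov1| / |cov2|).
    For the covariances above this factors into the two straight lines
    (x1 - mu1) = +/- (x2 - mu2), i.e. a pairwise linear classifier."""
    d = x - mu
    A = np.linalg.inv(cov2) - np.linalg.inv(cov1)
    c = np.log(np.linalg.det(cov1) / np.linalg.det(cov2))
    g = np.einsum('ij,jk,ik->i', d, A, d) - c   # quadratic form per sample
    return np.where(g > 0, 1, 2)                # assign class 1 where g > 0

pred1 = pair_of_lines_classifier(samples1, mu, cov1, cov2)
pred2 = pair_of_lines_classifier(samples2, mu, cov1, cov2)
accuracy = 0.5 * (np.mean(pred1 == 1) + np.mean(pred2 == 2))
print(f"pair-of-lines classifier accuracy: {accuracy:.3f}")
```

Because |cov1| = |cov2| in this sketch, the constant term of the discriminant vanishes and the quadratic factors exactly into the two lines x1 - mu1 = +/-(x2 - mu2); the printed accuracy is substantially better than what any single straight line can achieve on these two classes.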