Bayesian Model of Recognition on a Finite Set of Events
SETN '08 Proceedings of the 5th Hellenic conference on Artificial Intelligence: Theories, Models and Applications
In this paper, we study the probabilistic properties of pattern classifiers in a discrete feature space, using the principle of Bayesian averaging of recognition performance. We consider two cases: (a) the prior probabilities of the classes are unknown, and (b) the prior probabilities of the classes are known. The misclassification probability is treated as a random variable, for which the characteristic function (expressed via the Kummer hypergeometric function) and the absolute moments are derived analytically. For the case of unknown priors, an approximate formula for the sufficient learning-sample size is obtained, and the performances in the two cases are compared. As an example, we consider the classification of mutational hotspots in genetic sequences.
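The abstract treats the misclassification probability of a discrete classifier as a random variable under Bayesian averaging. A minimal Monte Carlo sketch of this idea is given below; it is not the paper's analytical derivation, and all concrete choices (two classes with known equal priors as in case (b), eight feature cells, flat Dirichlet priors on the unknown cell probabilities, and the particular cell counts) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup (not from the paper): two classes over a discrete
# feature space with K cells. The class-conditional cell probabilities
# are unknown; flat Dirichlet priors are updated with counts from a
# small learning sample, so the misclassification probability becomes
# a random variable under the posterior.
K = 8
counts0 = np.array([5, 3, 2, 1, 0, 1, 0, 0])  # class-0 cell counts (assumed)
counts1 = np.array([0, 1, 1, 2, 3, 2, 2, 1])  # class-1 cell counts (assumed)
prior = np.array([0.5, 0.5])                  # known class priors, case (b)

def error_samples(n_draws=20000):
    """Monte Carlo draws of the misclassification probability."""
    # Posterior draws of the cell-probability vectors (Dirichlet with
    # flat prior: alpha = counts + 1).
    p0 = rng.dirichlet(counts0 + 1, size=n_draws)
    p1 = rng.dirichlet(counts1 + 1, size=n_draws)
    # The Bayes rule assigns each cell to the class with the larger
    # joint mass; its error in each cell is the smaller joint mass.
    joint0 = prior[0] * p0
    joint1 = prior[1] * p1
    return np.minimum(joint0, joint1).sum(axis=1)

err = error_samples()
print(f"posterior mean error: {err.mean():.3f}")
print(f"posterior std dev:    {err.std():.3f}")
```

The empirical mean and spread of `err` play the role of the absolute moments that the paper obtains in closed form; as the learning-sample counts grow, the distribution of `err` concentrates, which is the mechanism behind a sufficient-sample-size criterion.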