Small Sample Size Effects in Statistical Pattern Recognition: Recommendations for Practitioners
IEEE Transactions on Pattern Analysis and Machine Intelligence
Neural networks and the bias/variance dilemma
Neural Computation
Some inconsistencies and misidentified modeling assumptions in probabilistic information retrieval
ACM Transactions on Information Systems (TOIS)
Texture Features for Browsing and Retrieval of Image Data
IEEE Transactions on Pattern Analysis and Machine Intelligence
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss
Machine Learning - Special issue on learning with probabilistic representations
Experimental evaluation of expert fusion strategies
Pattern Recognition Letters - Special issue on pattern recognition in practice VI
Pattern Recognition and Neural Networks
On Bias, Variance, 0/1—Loss, and the Curse-of-Dimensionality
Data Mining and Knowledge Discovery
Naive (Bayes) at Forty: The Independence Assumption in Information Retrieval
ECML '98 Proceedings of the 10th European Conference on Machine Learning
A Unified Bias-Variance Decomposition for Zero-One and Squared Loss
Proceedings of the Seventeenth National Conference on Artificial Intelligence and Twelfth Conference on Innovative Applications of Artificial Intelligence
A Weighted Combination of Classifiers Employing Shared and Distinct Representations
CVPR '98 Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Invariant operators, small samples, and the bias-variance dilemma
CVPR'04 Proceedings of the 2004 IEEE computer society conference on Computer vision and pattern recognition
A multilevel information fusion approach for visual quality inspection
Information Fusion
We consider the problem of image classification when more than one visual feature is available. In such cases, Bayes fusion offers an attractive solution by combining the results of different classifiers (one classifier per feature). This is the general form of the so-called "naive Bayes" approach. This paper compares the performance of Bayes fusion with that of Bayesian classification based on the joint feature distribution. It is well known that the latter has lower bias than the former, unless the features are conditionally independent, in which case the two coincide. However, as originally noted by Friedman, the low variance associated with naive Bayes estimation may mitigate the effect of its inherent bias. Indeed, for small training samples, naive Bayes may outperform Bayes classification in terms of error rate. The contribution of this paper is threefold. First, we present a detailed analysis of the error rate of Bayes fusion under the assumption that the statistical description of the data is known. Second, we provide a qualitative justification, based on bias/variance theory, of the effect of small sample size on classifier performance. Third, we present experimental results on three image data sets using color and texture features. Our experiments highlight the relationship between the error rates of the Bayes and Bayes fusion classifiers as a function of the training sample size.
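As a concrete illustration of the comparison, the sketch below contrasts the two decision rules on synthetic data: a "Bayes fusion" classifier that multiplies class-conditional densities estimated separately for each feature set (two illustrative feature vectors standing in for color and texture), and a joint Bayes classifier that fits a single density on the concatenated feature vector. The Gaussian class-conditional models, feature dimensions, and sample sizes are assumptions made for this example, not taken from the paper.

    # Minimal sketch (not the paper's code): Bayes fusion vs. joint Bayes
    # classification on synthetic two-class data. All model choices here
    # (Gaussian class-conditional densities, 3-D "color" and "texture"
    # features, equal priors) are illustrative assumptions.
    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(0)

    def make_data(n_per_class, d1=3, d2=3):
        """Two classes; each sample has a color-like feature x1 and a texture-like x2."""
        means = {0: (np.zeros(d1), np.zeros(d2)),
                 1: (0.8 * np.ones(d1), 0.8 * np.ones(d2))}
        X1, X2, y = [], [], []
        for c, (m1, m2) in means.items():
            X1.append(rng.normal(m1, 1.0, size=(n_per_class, d1)))
            X2.append(rng.normal(m2, 1.0, size=(n_per_class, d2)))
            y.append(np.full(n_per_class, c))
        return np.vstack(X1), np.vstack(X2), np.concatenate(y)

    def fit_gaussian(X):
        """Class-conditional Gaussian with full covariance plus a small ridge."""
        mu = X.mean(axis=0)
        cov = np.cov(X, rowvar=False) + 1e-3 * np.eye(X.shape[1])
        return multivariate_normal(mean=mu, cov=cov)

    def error_rates(train_n, test_n=5000):
        X1, X2, y = make_data(train_n)
        T1, T2, ty = make_data(test_n)
        fused_logp, joint_logp = [], []
        for c in (0, 1):
            g1 = fit_gaussian(X1[y == c])                      # per-feature density
            g2 = fit_gaussian(X2[y == c])
            gj = fit_gaussian(np.hstack([X1, X2])[y == c])     # joint density
            fused_logp.append(g1.logpdf(T1) + g2.logpdf(T2))   # product of marginals
            joint_logp.append(gj.logpdf(np.hstack([T1, T2])))
        fused_pred = np.argmax(np.stack(fused_logp), axis=0)
        joint_pred = np.argmax(np.stack(joint_logp), axis=0)
        return (fused_pred != ty).mean(), (joint_pred != ty).mean()

    for n in (10, 30, 100, 1000):
        e_fused, e_joint = error_rates(n)
        print(f"n={n:5d}  fusion error={e_fused:.3f}  joint error={e_joint:.3f}")

On typical runs of this sketch, the fused classifier matches or beats the joint model at the smallest training sizes, because the joint model must estimate a larger covariance matrix from the same few samples; the gap closes as the sample size grows, mirroring the small-sample effect discussed in the abstract.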