On the Bayes fusion of visual features

  • Authors:
  • Xiaojin Shi; Roberto Manduchi

  • Affiliations:
  • Department of Computer Engineering, University of California, Santa Cruz, Santa Cruz, CA 95064, USA (both authors)

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2007

Abstract

We consider the problem of image classification when more than one visual feature is available. In such cases, Bayes fusion offers an attractive solution by combining the results of different classifiers (one classifier per feature). This is the general form of the so-called "naive Bayes" approach. This paper compares the performance of Bayes fusion with that of Bayesian classification, which is based on the joint feature distribution. It is well known that the latter has lower bias than the former, unless the features are conditionally independent, in which case the two coincide. However, as originally noted by Friedman, the low variance associated with naive Bayes estimation may mitigate the effect of its inherent bias. Indeed, in the case of small training samples, naive Bayes may outperform Bayes classification in terms of error rate. The contribution of this paper is threefold. First, we present a detailed analysis of the error rate of Bayes fusion assuming that the statistical description of the data is known. Second, we provide a qualitative justification of the small-sample effect on the classifier's performance based on bias/variance theory. Third, we present experimental results on three image data sets using color and texture features. Our experiments highlight the relationship between the error rates of the Bayes and Bayes fusion classifiers as a function of the training sample size.
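The trade-off the abstract describes can be illustrated with a small simulation. The sketch below (a hedged illustration, not the paper's experimental setup; the Gaussian class models, means, and covariance are assumptions chosen for the demo) compares a naive Bayes fusion classifier, which fits one 1-D Gaussian per feature and sums the per-feature log-posteriors, against a full Bayes classifier fit on the joint 2-D feature distribution, over small and large training samples drawn from correlated (conditionally dependent) features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two classes with correlated 2-D Gaussian features; the correlation
# violates the conditional-independence assumption behind Bayes fusion.
means = {0: np.array([0.0, 0.0]), 1: np.array([1.5, 0.5])}
cov = np.array([[1.0, 0.7], [0.7, 1.0]])  # shared, correlated covariance

def sample(n_per_class):
    X = np.vstack([rng.multivariate_normal(means[c], cov, n_per_class)
                   for c in (0, 1)])
    y = np.repeat([0, 1], n_per_class)
    return X, y

def gauss_logpdf(x, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def naive_bayes_predict(Xtr, ytr, Xte):
    # Bayes fusion: one 1-D Gaussian classifier per feature,
    # per-feature log-likelihoods summed (equal class priors).
    scores = np.zeros((len(Xte), 2))
    for c in (0, 1):
        Xc = Xtr[ytr == c]
        mu, var = Xc.mean(axis=0), Xc.var(axis=0) + 1e-9
        scores[:, c] = gauss_logpdf(Xte, mu, var).sum(axis=1)
    return scores.argmax(axis=1)

def joint_bayes_predict(Xtr, ytr, Xte):
    # Full Bayes classifier on the joint 2-D feature distribution:
    # per-class Gaussian with full (lightly regularized) covariance.
    scores = np.zeros((len(Xte), 2))
    for c in (0, 1):
        Xc = Xtr[ytr == c]
        mu = Xc.mean(axis=0)
        S = np.cov(Xc.T) + 1e-6 * np.eye(2)
        d = Xte - mu
        _, logdet = np.linalg.slogdet(S)
        maha = np.einsum('ij,jk,ik->i', d, np.linalg.inv(S), d)
        scores[:, c] = -0.5 * (logdet + maha)
    return scores.argmax(axis=1)

Xte, yte = sample(5000)
for n in (5, 500):  # small vs large training sample per class
    naive_err, joint_err = [], []
    for _ in range(30):
        Xtr, ytr = sample(n)
        naive_err.append(np.mean(naive_bayes_predict(Xtr, ytr, Xte) != yte))
        joint_err.append(np.mean(joint_bayes_predict(Xtr, ytr, Xte) != yte))
    print(f"n={n:4d}  naive={np.mean(naive_err):.3f}  joint={np.mean(joint_err):.3f}")
```

With enough training data the joint classifier should approach the lower Bayes error (its boundary accounts for the feature correlation), while at very small sample sizes the extra parameters of the full covariance estimate add variance, so the biased but stable fusion classifier can come out ahead, mirroring the small-sample effect discussed in the abstract.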