On the monotonicity of the performance of Bayesian classifiers (Corresp.)

  • Authors:
  • W. Waller; A. Jain

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 1978

Abstract

Even with a finite set of training samples, the performance of a Bayesian classifier cannot be degraded by increasing the number of features, as long as the old features are recoverable from the new features. This is true even for the general Bayesian classifiers investigated by Hughes, a result which contradicts previous interpretations of Hughes' model. The reasons for these misinterpretations are discussed. It would appear that the peaking behavior of practical classifiers is caused principally by their nonoptimal use of the features.
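
The monotonicity claim can be illustrated numerically in the simpler known-distribution case: when the old feature is recoverable from the enlarged feature vector, the Bayes error on the enlarged vector can never exceed the Bayes error on the old feature alone, since any rule based on the old feature is still available. The sketch below is not from the paper; it uses a hypothetical discrete two-class distribution (`p_joint` is an arbitrary randomly generated joint distribution) purely to demonstrate the inequality.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class problem with discrete features (illustrative only).
# p_joint[c, x, y] = P(class=c, X=x, Y=y). X is the "old" feature;
# (X, Y) is the enlarged feature vector, so X is trivially recoverable from it.
p_joint = rng.random((2, 3, 4))
p_joint /= p_joint.sum()

def bayes_error(p):
    """Bayes error for a joint distribution P(class, features...):
    the probability mass not captured by the most probable class in each cell."""
    flat = p.reshape(p.shape[0], -1)           # classes x feature cells
    return (flat.sum(axis=0) - flat.max(axis=0)).sum()

err_old = bayes_error(p_joint.sum(axis=2))     # classifier sees X only
err_new = bayes_error(p_joint)                 # classifier sees (X, Y)

print(f"Bayes error with X only : {err_old:.4f}")
print(f"Bayes error with (X, Y) : {err_new:.4f}")
assert err_new <= err_old + 1e-12              # adding features never hurts the Bayes rule
```

The paper's contribution concerns the harder finite-training-sample Bayesian setting; the sketch only shows the underlying optimal-rule argument, which explains why any observed peaking must come from nonoptimal use of the features rather than from the Bayes rule itself.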