An analysis of Bayesian classifiers

  • Authors:
  • Pat Langley, Wayne Iba, and Kevin Thompson

  • Affiliations:
  • Pat Langley: NASA Ames Research Center, Moffett Field, CA; Wayne Iba: NASA Ames Research Center, Moffett Field, CA and RECOM Technologies; Kevin Thompson: NASA Ames Research Center, Moffett Field, CA and Sterling Software

  • Venue:
  • AAAI'92: Proceedings of the Tenth National Conference on Artificial Intelligence
  • Year:
  • 1992

Abstract

In this paper we present an average-case analysis of the Bayesian classifier, a simple induction algorithm that fares remarkably well on many learning tasks. Our analysis assumes a monotone conjunctive target concept, and independent, noise-free Boolean attributes. We calculate the probability that the algorithm will induce an arbitrary pair of concept descriptions and then use this to compute the probability of correct classification over the instance space. The analysis takes into account the number of training instances, the number of attributes, the distribution of these attributes, and the level of class noise. We also explore the behavioral implications of the analysis by presenting predicted learning curves for artificial domains, and give experimental results on these domains as a check on our reasoning.
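The classifier studied in the paper is the simple Bayesian (naive Bayes) classifier over independent Boolean attributes. As a rough illustration of the experimental setup the abstract describes, the sketch below trains such a classifier on data labeled by a monotone conjunctive target concept and estimates its classification accuracy on the instance space. The attribute count, concept size, training-set size, attribute probability, and Laplace smoothing are illustrative assumptions, not values or details taken from the paper.

    import random

    def conjunctive_label(x, relevant):
        # Target concept: conjunction of the first `relevant` Boolean attributes
        # (an illustrative monotone conjunctive concept).
        return all(x[:relevant])

    def sample_instance(n_attrs, p=0.5):
        # Independent Boolean attributes, each true with probability p.
        return [random.random() < p for _ in range(n_attrs)]

    class NaiveBayes:
        # Simple Bayesian classifier assuming attribute independence given the class;
        # Laplace smoothing is an illustrative choice, not a detail from the paper.
        def fit(self, X, y):
            self.classes = sorted(set(y))
            self.priors = {c: (sum(yi == c for yi in y) + 1) / (len(y) + len(self.classes))
                           for c in self.classes}
            self.cond = {}  # estimated P(attribute i is true | class c)
            for c in self.classes:
                Xc = [x for x, yi in zip(X, y) if yi == c]
                self.cond[c] = [(sum(x[i] for x in Xc) + 1) / (len(Xc) + 2)
                                for i in range(len(X[0]))]
            return self

        def predict(self, x):
            def score(c):
                s = self.priors[c]
                for i, v in enumerate(x):
                    s *= self.cond[c][i] if v else 1 - self.cond[c][i]
                return s
            return max(self.classes, key=score)

    if __name__ == "__main__":
        # One empirical learning-curve point: accuracy after n_train training instances.
        random.seed(0)
        n_attrs, relevant, n_train, n_test = 6, 3, 50, 1000
        train = [sample_instance(n_attrs) for _ in range(n_train)]
        labels = [conjunctive_label(x, relevant) for x in train]
        nb = NaiveBayes().fit(train, labels)
        test = [sample_instance(n_attrs) for _ in range(n_test)]
        acc = sum(nb.predict(x) == conjunctive_label(x, relevant) for x in test) / n_test
        print(f"accuracy after {n_train} training instances: {acc:.3f}")

Repeating a run like this over increasing training-set sizes yields an empirical learning curve of the kind the paper compares against its average-case predictions; the paper's analysis instead computes the expected accuracy analytically from the number of training instances, the number of attributes, and their distribution.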