Bayesian network classifiers with reduced precision parameters

  • Authors:
  • Sebastian Tschiatschek, Peter Reinprecht, Manfred Mücke, Franz Pernkopf

  • Affiliations:
  • Sebastian Tschiatschek, Peter Reinprecht, Franz Pernkopf: Signal Processing and Speech Communication Laboratory, Graz University of Technology, Graz, Austria
  • Manfred Mücke: Research Group Theory and Applications of Algorithms, University of Vienna, Vienna, Austria; Sustainable Computing Research, Austria

  • Venue:
  • ECML PKDD'12: Proceedings of the 2012 European Conference on Machine Learning and Knowledge Discovery in Databases - Volume Part I
  • Year:
  • 2012

Abstract

Bayesian network classifiers (BNCs) are probabilistic classifiers that achieve good performance in many applications. They consist of a directed acyclic graph and a set of conditional probabilities associated with the nodes of the graph; these conditional probabilities are also referred to as the parameters of the BNC. According to common belief, these classifiers are insensitive to deviations of the conditional probabilities under two conditions: first, the probabilities are not too extreme, i.e. not too close to 0 or 1; second, the posterior probabilities of the classes differ significantly. In this paper, we investigate the effect of reducing the precision of the parameters on the classification performance of BNCs. The probabilities are determined either generatively or discriminatively; discriminatively optimized probabilities are typically more extreme. Nevertheless, our results indicate that BNCs with discriminatively optimized parameters are almost as robust to precision reduction as BNCs with generatively optimized parameters. Furthermore, even large reductions in precision do not degrade classification performance significantly. Our results allow BNCs to be implemented with lower computational complexity, which supports their application in embedded systems using floating-point numbers with small bit-widths. Reduced bit-widths further make it possible to represent BNCs in the integer domain while maintaining classification performance.
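The precision-reduction idea from the abstract can be illustrated with a small sketch. The code below is not the paper's implementation: it assumes the simplest BNC structure (naive Bayes), random generatively-style parameters, and a plain fixed-point rounding of the log-parameters as the precision-reduction model; all names and settings are illustrative.

```python
import numpy as np

def quantize(logp, frac_bits):
    """Round log-probabilities to a fixed-point grid with `frac_bits`
    fractional bits -- a simple model of reduced-precision parameters."""
    scale = 2.0 ** frac_bits
    return np.round(logp * scale) / scale

def predict(x, log_prior, log_cpts):
    """MAP classification: argmax over classes of the log joint, i.e.
    log prior plus the sum of per-feature conditional log-probabilities."""
    scores = log_prior + log_cpts[:, np.arange(len(x)), x].sum(axis=1)
    return int(np.argmax(scores))

rng = np.random.default_rng(0)
n_classes, n_features, n_values = 3, 8, 4

# Random class prior and per-class conditional probability tables,
# standing in for generatively learned naive Bayes parameters.
prior = rng.dirichlet(np.ones(n_classes))
cpts = rng.dirichlet(np.ones(n_values), size=(n_classes, n_features))

log_prior, log_cpts = np.log(prior), np.log(cpts)
# Reduced precision: keep only 4 fractional bits of each log-parameter.
q_prior, q_cpts = quantize(log_prior, 4), quantize(log_cpts, 4)

# Compare full-precision and reduced-precision decisions on random inputs.
X = rng.integers(0, n_values, size=(200, n_features))
full = np.array([predict(x, log_prior, log_cpts) for x in X])
reduced = np.array([predict(x, q_prior, q_cpts) for x in X])
agreement = float(np.mean(full == reduced))
print(f"agreement between full- and reduced-precision parameters: {agreement:.2%}")
```

Because each quantized log-parameter deviates by at most 2^-5, the total score perturbation is small relative to typical score gaps between classes, so most decisions survive the precision reduction; this is the robustness effect the paper studies systematically.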