Bayesian network classifiers (BNCs) are probabilistic classifiers that achieve good performance in many applications. A BNC consists of a directed acyclic graph and a set of conditional probabilities associated with the nodes of the graph; these conditional probabilities are also referred to as the parameters of the BNC. According to common belief, these classifiers are insensitive to deviations in the conditional probabilities under two conditions: first, that the probabilities are not too extreme, i.e. not too close to 0 or 1; and second, that the posterior probabilities of the classes differ significantly. In this paper, we investigate the effect of reducing the precision of the parameters on the classification performance of BNCs. The probabilities are determined either generatively or discriminatively; discriminatively optimized probabilities are typically more extreme. Nevertheless, our results indicate that BNCs with discriminatively optimized parameters are almost as robust to precision reduction as BNCs with generatively optimized parameters. Furthermore, even a large reduction in precision does not decrease classification performance significantly. Our results allow BNCs to be implemented with lower computational complexity, which supports their application in embedded systems using floating-point numbers with small bit-widths. Reduced bit-widths further make it possible to represent BNCs in the integer domain while maintaining classification performance.
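The core idea can be sketched in a few lines. The toy model below (a two-class, two-feature naive Bayes classifier, the simplest BNC; all probabilities are illustrative and not taken from the paper) quantizes each log-parameter to a fixed-point grid with a small number of fractional bits and checks whether the predicted classes change. Scaling the quantized values by the grid step also yields the integer-domain representation mentioned above.

```python
import math

# Hypothetical toy naive Bayes classifier: 2 classes, 2 binary features.
# All probability values are illustrative, not from the paper.
priors = [0.6, 0.4]
# cpt[c][f] = P(feature f = 1 | class c)
cpt = [[0.8, 0.3],
       [0.2, 0.7]]

def log_posteriors(x, frac_bits=None):
    """Unnormalized log-posteriors for input x.
    If frac_bits is given, every log-parameter is rounded to a fixed-point
    grid with that many fractional bits (precision reduction)."""
    def q(v):
        if frac_bits is None:
            return v
        scale = 1 << frac_bits
        return round(v * scale) / scale  # integer multiple of 2**-frac_bits
    scores = []
    for c in range(len(priors)):
        s = q(math.log(priors[c]))
        for f, xf in enumerate(x):
            p = cpt[c][f] if xf else 1.0 - cpt[c][f]
            s += q(math.log(p))  # sum of quantized log-parameters
        scores.append(s)
    return scores

def predict(x, frac_bits=None):
    s = log_posteriors(x, frac_bits)
    return 0 if s[0] >= s[1] else 1

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
full = [predict(x) for x in inputs]               # full double precision
reduced = [predict(x, frac_bits=3) for x in inputs]  # only 3 fractional bits
```

For this toy model, `reduced` equals `full`: with only 3 fractional bits per log-parameter, every decision is preserved, which mirrors the robustness result at a much smaller scale than the experiments in the paper.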