Bayesian networks (BNs) provide a powerful graphical model for encoding the probabilistic relationships among a set of variables and can therefore be used naturally for classification. However, Bayesian network classifiers (BNCs) learned in the common way, using likelihood scores, usually achieve only mediocre classification accuracy, because these scores are not specific to classification but rather suit a general inference problem. We propose risk minimization by cross-validation (RMCV) using the 0/1 loss function, a classification-oriented score for unrestricted BNCs. RMCV extends the classification-oriented scores commonly used in learning restricted BNCs and non-BN classifiers. On small real and synthetic problems, for which all possible graphs can be learned, we empirically demonstrate the superiority of RMCV over marginal and class-conditional likelihood-based scores with respect to classification accuracy. Experiments on twenty-two real-world datasets show that BNCs learned using an RMCV-based algorithm significantly outperform the naive Bayesian classifier (NBC), the tree-augmented NBC (TAN), and other BNCs learned using marginal or conditional likelihood scores, and are on par with state-of-the-art non-BN classifiers such as the support vector machine, neural network, and classification tree. These experiments also show that an optimized version of RMCV is faster than all other unrestricted BNCs and comparable to the neural network in run-time. The main conclusion from our experiments is that unrestricted BNCs, when learned properly, can be a good alternative to restricted BNCs and traditional machine-learning classifiers with respect to both accuracy and efficiency.
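The core idea of RMCV as described above is to score a candidate classifier (in the paper, a candidate BN structure) by its cross-validated 0/1 loss rather than by a likelihood score. The following is a minimal sketch of that scoring loop; the function names (`rmcv_score`, `kfold_indices`) and the toy majority-class model standing in for a candidate BN structure are illustrative assumptions, not the authors' implementation.

```python
# Sketch of cross-validated 0/1-loss scoring (the RMCV idea).
# The majority-class "model" below is a hypothetical stand-in for a
# candidate Bayesian network structure; only the scoring loop matters.
from collections import Counter

def kfold_indices(n, k):
    """Yield (train_idx, test_idx) index lists for k-fold cross-validation."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

def rmcv_score(fit, predict, X, y, k=5):
    """Average 0/1 loss over k folds; lower scores indicate better candidates."""
    errors = 0
    for train, test in kfold_indices(len(y), k):
        model = fit([X[i] for i in train], [y[i] for i in train])
        errors += sum(1 for i in test if predict(model, X[i]) != y[i])
    return errors / len(y)

# Toy stand-in for a candidate structure: predict the majority class.
def fit_majority(X, y):
    return Counter(y).most_common(1)[0][0]

def predict_majority(model, x):
    return model
```

In a structure-learning search, `rmcv_score` would be evaluated for each candidate graph (with `fit` estimating its parameters and `predict` performing BN inference), and the candidate with the lowest cross-validated 0/1 loss would be retained.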