We propose an efficient and parameter-free scoring criterion, the factorized conditional log-likelihood (f̂CLL), for learning Bayesian network classifiers. The proposed score approximates the conditional log-likelihood criterion. The approximation is devised to guarantee decomposability over the network structure, as well as efficient estimation of the optimal parameters, achieving the same time and space complexity as the traditional log-likelihood scoring criterion. The resulting criterion has an information-theoretic interpretation based on interaction information, which makes its discriminative nature explicit. To evaluate the performance of the proposed criterion, we present an empirical comparison with state-of-the-art classifiers. Results on a large suite of benchmark data sets from the UCI repository show that f̂CLL-trained classifiers achieve accuracy at least as good as that of the best competing classifiers, while using significantly fewer computational resources.
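The abstract does not give the f̂CLL formula itself, but the decomposability property it relies on can be illustrated with the standard log-likelihood score it is compared against: the score splits into one term per (variable, parents) family, so a local change to the structure only requires re-scoring the affected families. The sketch below is a minimal, hypothetical illustration of such a decomposable score for discrete data (the function name and data layout are our own, not from the paper):

```python
import math
from collections import Counter

def log_likelihood_score(data, structure):
    """Decomposable log-likelihood score of a Bayesian network structure.

    data: list of dicts mapping variable name -> discrete value
    structure: dict mapping variable name -> tuple of parent names

    The score is a sum of independent per-family terms, which is the
    decomposability property that f̂CLL is designed to preserve while
    approximating the (non-decomposable) conditional log-likelihood.
    """
    score = 0.0
    for var, parents in structure.items():
        family_counts = Counter()   # N(parent config, child value)
        parent_counts = Counter()   # N(parent config)
        for row in data:
            pcfg = tuple(row[p] for p in parents)
            family_counts[(pcfg, row[var])] += 1
            parent_counts[pcfg] += 1
        # Maximum-likelihood parameters: theta = N(x, pa) / N(pa);
        # each family contributes sum_x N(x, pa) * log theta.
        for (pcfg, _val), n in family_counts.items():
            score += n * math.log(n / parent_counts[pcfg])
    return score

# Usage: score a naive-Bayes structure (class C is the sole parent of X).
data = [{"C": 0, "X": 0}, {"C": 0, "X": 0}, {"C": 1, "X": 1}]
naive_bayes = {"C": (), "X": ("C",)}
print(log_likelihood_score(data, naive_bayes))
```

Here X is deterministic given C, so the X family contributes zero and the whole score reduces to the class marginal term, 2·log(2/3) + log(1/3).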