Many approaches attempt to improve naive Bayes; they are broadly divided into five main categories: (1) structure extension; (2) attribute weighting; (3) attribute selection; (4) instance weighting; (5) instance selection, also called local learning. In this paper, we take the structure-extension approach and propose a random Bayes model obtained by augmenting the structure of naive Bayes. We call it random one-dependence estimators (RODE). In RODE, each attribute has at most one parent among the other attributes, and this parent is randomly selected from the log2(m) attributes (where m is the number of attributes) with the maximal conditional mutual information. Our work thus introduces randomness into Bayesian network classifiers. Experimental results on a large number of UCI data sets validate its effectiveness in terms of classification accuracy, class probability estimation, and ranking.
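The parent-selection step described above can be sketched in a few lines of Python. The sketch is only illustrative: the function names (conditional_mutual_information, select_random_parents), the column-list data layout, and the choice of ceil(log2 m) candidates are assumptions for this example rather than the paper's implementation. It shows how each attribute could draw one parent uniformly at random from its top-ranked attributes by conditional mutual information given the class.

```python
# Illustrative sketch of RODE-style parent selection (assumptions noted above).
import math
import random
from collections import Counter


def conditional_mutual_information(xi, xj, y):
    """Estimate I(Xi; Xj | Y) from parallel lists of discrete values."""
    n = len(y)
    p_abc = Counter(zip(xi, xj, y))   # joint counts of (Xi, Xj, Y)
    p_ac = Counter(zip(xi, y))        # joint counts of (Xi, Y)
    p_bc = Counter(zip(xj, y))        # joint counts of (Xj, Y)
    p_c = Counter(y)                  # counts of Y
    cmi = 0.0
    for (a, b, c), count in p_abc.items():
        joint = count / n
        cmi += joint * math.log(
            (joint * (p_c[c] / n)) / ((p_ac[(a, c)] / n) * (p_bc[(b, c)] / n))
        )
    return cmi


def select_random_parents(X, y, rng=random):
    """For each attribute column in X, rank the other attributes by
    conditional mutual information given the class and pick one parent
    at random from the top ceil(log2 m) candidates."""
    m = len(X)                        # X is a list of attribute columns
    k = max(1, math.ceil(math.log2(m)))
    parents = {}
    for i in range(m):
        scores = [
            (conditional_mutual_information(X[i], X[j], y), j)
            for j in range(m) if j != i
        ]
        scores.sort(reverse=True)     # highest CMI first
        candidates = [j for _, j in scores[:k]]
        if not candidates:            # degenerate case: a single attribute
            continue
        parents[i] = rng.choice(candidates)
    return parents


# Toy usage: three discrete attributes (as columns) and a binary class.
X = [[0, 1, 1, 0, 1, 0], [0, 1, 0, 0, 1, 1], [1, 1, 0, 1, 0, 0]]
y = [0, 1, 1, 0, 1, 0]
print(select_random_parents(X, y))    # e.g. {0: 1, 1: 0, 2: 0}
```

With the parent of each attribute fixed this way, the remaining training step would follow the usual one-dependence scheme: estimate P(y), P(x_parent(i) | y), and P(x_i | x_parent(i), y) from counts, which the sketch does not cover.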