C4.5: programs for machine learning
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss
Machine Learning - Special issue on learning with probabilistic representations
Data mining: practical machine learning tools and techniques with Java implementations
Lazy Learning of Bayesian Rules
Machine Learning
Hierarchical Latent Class Models for Cluster Analysis
The Journal of Machine Learning Research
Not So Naive Bayes: Aggregating One-Dependence Estimators
Machine Learning
Efficient lazy elimination for averaged one-dependence estimators
ICML '06 Proceedings of the 23rd international conference on Machine learning
Automatic web pages categorization with ReliefF and Hidden Naive Bayes
Proceedings of the 2007 ACM symposium on Applied computing
Effects of highly agreed documents in relevancy prediction
SIGIR '07 Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval
IEEE Transactions on Knowledge and Data Engineering
Discriminatively Learning Selective Averaged One-Dependence Estimators Based on Cross-Entropy Method
Computational Intelligence and Security
Survey of Improving Naive Bayes for Classification
ADMA '07 Proceedings of the 3rd international conference on Advanced Data Mining and Applications
Recognising Human Emotions from Body Movement and Gesture Dynamics
ACII '07 Proceedings of the 2nd international conference on Affective Computing and Intelligent Interaction
A Combined Classification Algorithm Based on C4.5 and NB
ISICA '08 Proceedings of the 3rd International Symposium on Advances in Computation and Intelligence
Instance Selection by Border Sampling in Multi-class Domains
ADMA '09 Proceedings of the 5th International Conference on Advanced Data Mining and Applications
RADAR: a personal assistant that learns to reduce email overload
AAAI'08 Proceedings of the 23rd national conference on Artificial intelligence - Volume 3
An Empirical Study on Several Classification Algorithms and Their Improvements
ISICA '09 Proceedings of the 4th International Symposium on Advances in Computation and Intelligence
Scaling Up the Accuracy of Bayesian Network Classifiers by M-Estimate
ICIC '07 Proceedings of the 3rd International Conference on Intelligent Computing: Advanced Intelligent Computing Theories and Applications. With Aspects of Artificial Intelligence
Evaluating query-independent object features for relevancy prediction
ECIR'07 Proceedings of the 29th European conference on IR research
Classification of feedback expressions in multimodal data
ACLShort '10 Proceedings of the ACL 2010 Conference Short Papers
The Knowledge Engineering Review
Random one-dependence estimators
Pattern Recognition Letters
Identifying important action primitives for high level activity recognition
EuroSSC'10 Proceedings of the 5th European conference on Smart sensing and context
Cascading customized naïve Bayes couple
AI'10 Proceedings of the 23rd Canadian conference on Advances in Artificial Intelligence
A general MCMC method for Bayesian inference in logic-based probabilistic modeling
IJCAI'11 Proceedings of the Twenty-Second international joint conference on Artificial Intelligence - Volume Two
Random multiclass classification: generalizing random forests to random MNL and random NB
DEXA'07 Proceedings of the 18th international conference on Database and Expert Systems Applications
Automated feature weighting in naive bayes for high-dimensional data classification
Proceedings of the 21st ACM international conference on Information and knowledge management
Indoor location prediction using multiple wireless received signal strengths
AusDM '08 Proceedings of the 7th Australasian Data Mining Conference - Volume 87
Ensemble selection for feature-based classification of diabetic maculopathy images
Computers in Biology and Medicine
Learning attribute weighted AODE for ROC area ranking
International Journal of Information and Communication Technology
The conditional independence assumption of naive Bayes essentially ignores attribute dependencies and is often violated. On the other hand, although a Bayesian network can represent arbitrary attribute dependencies, learning an optimal Bayesian network from data is intractable; the main obstacle is that learning the optimal structure of a Bayesian network is extremely time-consuming. Thus, a Bayesian model that requires no structure learning is desirable. In this paper, we propose a novel model called hidden naive Bayes (HNB). In an HNB, a hidden parent is created for each attribute that combines the influences of all other attributes. We present an approach to creating hidden parents using the average of weighted one-dependence estimators. HNB inherits the structural simplicity of naive Bayes and can be learned without structure learning. We propose an algorithm for learning HNB based on conditional mutual information. We experimentally evaluate HNB's classification accuracy on the 36 UCI data sets recommended by Weka (Witten & Frank 2000), comparing it to naive Bayes (Langley, Iba, & Thompson 1992), C4.5 (Quinlan 1993), SBC (Langley & Sage 1994), NBTree (Kohavi 1996), CL-TAN (Friedman, Geiger, & Goldszmidt 1997), and AODE (Webb, Boughton, & Wang 2005). The experimental results show that HNB outperforms naive Bayes, C4.5, SBC, NBTree, and CL-TAN, and is competitive with AODE.
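The description above can be sketched in code: each attribute's hidden parent is a mixture of one-dependence estimates P(a_i | a_j, c), weighted by the conditional mutual information I(A_i; A_j | C). The following is a minimal, illustrative Python sketch for integer-coded discrete attributes, assuming this formulation; the class name, helper names, and smoothing details are our own choices, not the authors' implementation.

```python
import numpy as np

class HiddenNaiveBayes:
    """Illustrative sketch of hidden naive Bayes (HNB) for discrete,
    integer-coded attributes.  The hidden parent of attribute A_i is a
    mixture of the one-dependence estimates P(a_i | a_j, c) over all
    j != i, weighted by the conditional mutual information I(A_i; A_j | C)."""

    def fit(self, X, y):
        self.X_, self.y_ = np.asarray(X), np.asarray(y)
        self.classes_ = np.unique(self.y_)
        n, d = self.X_.shape
        self.d_ = d
        self.vals_ = [np.unique(self.X_[:, i]).size for i in range(d)]
        # Laplace-smoothed class priors
        self.prior_ = np.array([(np.sum(self.y_ == c) + 1.0) /
                                (n + self.classes_.size) for c in self.classes_])
        # W[i, j] proportional to I(A_i; A_j | C); rows normalised to sum to 1,
        # falling back to uniform weights when all CMI values in a row are zero
        W = np.array([[self._cmi(i, j) if i != j else 0.0
                       for j in range(d)] for i in range(d)])
        row = W.sum(axis=1, keepdims=True)
        self.W_ = np.where(row > 0, W / np.maximum(row, 1e-12),
                           (1.0 - np.eye(d)) / max(d - 1, 1))
        return self

    def _cmi(self, i, j):
        """Empirical conditional mutual information I(A_i; A_j | C)."""
        cmi, n = 0.0, len(self.y_)
        for c in self.classes_:
            xi, xj = self.X_[self.y_ == c, i], self.X_[self.y_ == c, j]
            pc = xi.size / n
            for a in np.unique(xi):
                for b in np.unique(xj):
                    pab = np.mean((xi == a) & (xj == b))
                    pa, pb = np.mean(xi == a), np.mean(xj == b)
                    if pab > 0:
                        cmi += pc * pab * np.log(pab / (pa * pb))
        return max(cmi, 0.0)

    def _cond(self, i, a, j, b, c):
        """Laplace-smoothed estimate of P(A_i = a | A_j = b, C = c)."""
        sel = (self.y_ == c) & (self.X_[:, j] == b)
        return (np.sum(self.X_[sel, i] == a) + 1.0) / (sel.sum() + self.vals_[i])

    def predict(self, X):
        out = []
        for x in np.asarray(X):
            scores = []
            for k, c in enumerate(self.classes_):
                logp = np.log(self.prior_[k])
                for i in range(self.d_):
                    # hidden parent: CMI-weighted mixture of one-dependence estimates
                    p = sum(self.W_[i, j] * self._cond(i, x[i], j, x[j], c)
                            for j in range(self.d_) if j != i)
                    logp += np.log(max(p, 1e-12))
                scores.append(logp)
            out.append(self.classes_[int(np.argmax(scores))])
        return np.array(out)
```

Because the weights W are fixed once the CMI table is computed, no structure search is performed; training reduces to counting, which is what gives HNB the naive-Bayes-like simplicity the abstract describes.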