Semi-naive Bayesian classifier. In: EWSL-91: Proceedings of the European Working Session on Learning on Machine Learning.
Representation and learning in information retrieval.
Machine learning, neural and statistical classification.
Wrappers for feature subset selection. In: Artificial Intelligence, special issue on relevance.
On the Optimality of the Simple Bayesian Classifier under Zero-One Loss. In: Machine Learning, special issue on learning with probabilistic representations.
Naive (Bayes) at Forty: The Independence Assumption in Information Retrieval. In: ECML '98: Proceedings of the 10th European Conference on Machine Learning.
Data Mining: Practical Machine Learning Tools and Techniques, Second Edition (Morgan Kaufmann Series in Data Management Systems).
Multi criteria wrapper improvements to naive Bayes learning. In: IDEAL '06: Proceedings of the 7th International Conference on Intelligent Data Engineering and Automated Learning.
Building a Spanish MMTx by Using Automatic Translation and Biomedical Ontologies. In: IDEAL '08: Proceedings of the 9th International Conference on Intelligent Data Engineering and Automated Learning.
The Naive Bayes classifier is based on the (unrealistic) assumption that attribute values are independent of one another given the class value. Consequently, its effectiveness may decrease in the presence of interdependent attributes. In spite of this, the Naive Bayes classifier has earned a privileged position in recent years for several reasons [1]. We present DGW (Dependency Guided Wrapper), a wrapper that uses information about attribute dependencies to transform the data representation and thereby improve Naive Bayes classification. This paper presents experiments comparing the performance and execution time of 12 DGW variations against 12 previous approaches, such as constructive induction of Cartesian product attributes and wrappers that search for optimal attribute subsets. Experimental results show that DGW generates a new data representation that allows Naive Bayes to obtain better accuracy more often than any other wrapper tested. The DGW variations also reach the best possible accuracy more often than state-of-the-art wrappers, while often spending less time in the attribute subset search.
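The abstract does not spell out DGW's search procedure, so the sketch below only illustrates the general idea it builds on: a wrapper that joins an interdependent attribute pair into a single Cartesian product attribute and keeps the new representation only if cross-validated Naive Bayes accuracy improves. The helper names (nb_accuracy, join_pair, dependency_guided_join), the use of scikit-learn, and the mutual-information dependency measure are assumptions made for this example, not the paper's actual method.

# Illustrative sketch only, NOT the paper's DGW algorithm: a greedy wrapper that
# joins the most interdependent attribute pair into a Cartesian product attribute
# and keeps the change only when Naive Bayes accuracy improves.
from itertools import combinations

import numpy as np
from sklearn.metrics import mutual_info_score
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import CategoricalNB
from sklearn.preprocessing import OrdinalEncoder


def nb_accuracy(X_cat, y, cv=5):
    """Cross-validated accuracy of Naive Bayes on a categorical data matrix."""
    enc = OrdinalEncoder()
    X = enc.fit_transform(X_cat)
    # min_categories keeps CategoricalNB from failing on categories unseen in a fold.
    clf = CategoricalNB(min_categories=[len(c) for c in enc.categories_])
    return cross_val_score(clf, X, y, cv=cv).mean()


def join_pair(X_cat, i, j):
    """Replace attributes i and j with their Cartesian product attribute."""
    joined = np.array([f"{a}|{b}" for a, b in zip(X_cat[:, i], X_cat[:, j])])
    keep = [k for k in range(X_cat.shape[1]) if k not in (i, j)]
    return np.column_stack([X_cat[:, keep], joined])


def dependency_guided_join(X_cat, y):
    """Try joining attribute pairs, most dependent first; keep the first improvement."""
    base = nb_accuracy(X_cat, y)
    # Rank pairs by mutual information between attributes (a stand-in dependency measure).
    pairs = sorted(
        combinations(range(X_cat.shape[1]), 2),
        key=lambda p: mutual_info_score(X_cat[:, p[0]], X_cat[:, p[1]]),
        reverse=True,
    )
    for i, j in pairs:
        candidate = join_pair(X_cat, i, j)
        score = nb_accuracy(candidate, y)
        if score > base:  # wrapper criterion: keep only representations that help
            return candidate, score
    return X_cat, base

Joining two dependent attributes removes one violated independence assumption, at the price of a larger value space and sparser counts for the combined attribute, which is why a wrapper that checks estimated accuracy before accepting the join is a natural evaluation strategy.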