The naïve Bayes classifier is one of the most effective and efficient classification algorithms. Its elegant simplicity and strong empirical accuracy, even when the attribute-independence assumption is violated, sustain ongoing interest in the model. Rough set theory has been used for a range of tasks in knowledge discovery and successfully applied to many real-life problems. In this study, we use the ability of rough sets to discover attribute dependencies in order to compensate for the unrealistic independence assumption of naïve Bayes. We propose a new algorithm, Rough-Naïve Bayes (RNB), which adjusts attribute weights according to their dependencies and their contribution to the final decision. Experimental results show that RNB achieves better performance than the standard naïve Bayes classifier.
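The weighting scheme the abstract describes can be illustrated with a minimal sketch of a weighted naïve Bayes classifier, where each attribute's log-likelihood is scaled by a weight in [0, 1]. Note this is an assumption-laden illustration: the abstract does not specify how RNB derives its weights from rough-set dependency measures, so the weights here are simply supplied by the caller; `train_weighted_nb` and `predict_weighted_nb` are hypothetical names.

```python
from collections import defaultdict
import math

def train_weighted_nb(rows, labels):
    """Estimate class priors and per-attribute conditional counts
    from categorical data (Laplace smoothing is applied at predict time)."""
    n = len(rows)
    class_counts = defaultdict(int)
    cond_counts = defaultdict(lambda: defaultdict(int))  # (attr_idx, class) -> value -> count
    values = defaultdict(set)                            # attr_idx -> set of observed values
    for row, c in zip(rows, labels):
        class_counts[c] += 1
        for i, v in enumerate(row):
            cond_counts[(i, c)][v] += 1
            values[i].add(v)
    return class_counts, cond_counts, values, n

def predict_weighted_nb(model, row, weights):
    """Score each class with log P(c) + sum_i w_i * log P(x_i | c).
    A weight w_i < 1 down-weights an attribute, e.g. one that rough-set
    analysis flags as strongly dependent on (redundant with) others."""
    class_counts, cond_counts, values, n = model
    best_class, best_score = None, float("-inf")
    for c, cc in class_counts.items():
        score = math.log(cc / n)
        for i, v in enumerate(row):
            num = cond_counts[(i, c)][v] + 1      # Laplace smoothing
            den = cc + len(values[i])
            score += weights[i] * math.log(num / den)
        if score > best_score:
            best_class, best_score = c, score
    return best_class
```

With all weights set to 1.0 this reduces exactly to standard naïve Bayes, which makes the effect of any weighting scheme easy to isolate in experiments.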