Machine Learning - Special issue on learning with probabilistic representations
Extending naïve Bayes classifiers using long itemsets. KDD '99: Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining.
Data mining: practical machine learning tools and techniques with Java implementations.
Lazy Learning of Bayesian Rules. Machine Learning.
A Method to Boost Naïve Bayesian Classifiers. PAKDD '02: Proceedings of the 6th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining.
Not So Naive Bayes: Aggregating One-Dependence Estimators. Machine Learning.
HODE: Hidden One-Dependence Estimator. ECSQARU '09: Proceedings of the 10th European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty.
A Novel Bayes Model: Hidden Naive Bayes. IEEE Transactions on Knowledge and Data Engineering.
Scaling Up the Accuracy of Bayesian Network Classifiers by M-Estimate. ICIC '07: Proceedings of the 3rd International Conference on Intelligent Computing: Advanced Intelligent Computing Theories and Applications, With Aspects of Artificial Intelligence.
Weightily averaged one-dependence estimators. PRICAI '06: Proceedings of the 9th Pacific Rim International Conference on Artificial Intelligence.
Ensemble selection for superparent-one-dependence estimators. AI '05: Proceedings of the 18th Australian Joint Conference on Advances in Artificial Intelligence.
Frequent Itemsets Mining Classifier (FISC) is an improved Bayesian classifier that averages all the classifiers built from frequent itemsets. In learning a Bayesian network classifier, estimating probabilities from a given set of training examples is crucial, and the m-estimate has been shown to scale up the accuracy of many Bayesian classifiers. A natural question, therefore, is whether FISC with the m-estimate can perform even better. In response, this paper aims to scale up the accuracy of FISC by the m-estimate and proposes new probability estimation formulas. The experimental results show that the Laplace estimate used in the original FISC does not perform very well, and that our m-estimate greatly improves the accuracy, even outperforming the other outstanding Bayesian classifiers used for comparison.
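To make the difference between the two estimators concrete, here is a minimal sketch of the standard Laplace and m-estimate formulas as they are commonly used for class-probability estimation in Bayesian classifiers; the function names, the choice of `m`, and the uniform prior are illustrative assumptions, not the paper's specific formulas.

```python
def laplace_estimate(n_c: int, n: int, k: int) -> float:
    """Laplace (add-one) estimate: (n_c + 1) / (n + k),
    where n_c = count in class c, n = total count, k = number of classes."""
    return (n_c + 1) / (n + k)

def m_estimate(n_c: int, n: int, p: float, m: float = 1.0) -> float:
    """m-estimate: (n_c + m * p) / (n + m),
    where p is a prior probability (here assumed uniform, p = 1/k)
    and m is the equivalent sample size controlling the prior's weight."""
    return (n_c + m * p) / (n + m)

# Example: 3 of 10 training examples fall in class c, with k = 2 classes.
print(laplace_estimate(3, 10, 2))
print(m_estimate(3, 10, p=0.5, m=1.0))
```

Note that the Laplace estimate is the special case of the m-estimate with m = k and a uniform prior p = 1/k; the m-estimate's advantage is that m and p can be tuned rather than fixed.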