We study a new method for improving the classification accuracy of a model composed of classification association rules (CARs). The method reorders the original set of rules according to the error rates observed on a set of training examples. This reordering is applied iteratively, starting from the original rule set; after N models have been obtained, they are used as an ensemble for classifying new cases. The net effect of this approach is a clear improvement over the original rule model. The improvement comes from ensembling the obtained models, which are, individually, only slightly better than the original one. This ensembling approach has the advantage of requiring a single learning process, since the models in the ensemble are obtained by self-replicating the original one.
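The iterative reorder-and-ensemble idea can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the rule representation (antecedent itemset plus predicted class, first matching rule fires), the per-rule error measure (error rate of a rule on the training examples for which it is the first to fire under the current order), and all function names are assumptions made for the sketch.

```python
from collections import Counter

# A rule is a pair (antecedent, predicted_class), with the antecedent a frozenset
# of items. Classification uses the first rule whose antecedent is contained in
# the example's items; if no rule fires, a default class is returned.
def predict(rules, items, default):
    for antecedent, cls in rules:
        if antecedent <= items:
            return cls
    return default

def rule_errors(rules, data, default):
    """Error rate of each rule on the training examples it fires first for,
    under the current rule order (assumed error measure for this sketch)."""
    errors = []
    for i, (antecedent, cls) in enumerate(rules):
        fired = wrong = 0
        for items, label in data:
            # Rule i fires first iff it matches and no earlier rule matches.
            if antecedent <= items and not any(a <= items for a, _ in rules[:i]):
                fired += 1
                wrong += int(cls != label)
        # Rules that never fire are sent to the back with a pessimistic error.
        errors.append(wrong / fired if fired else 1.0)
    return errors

def self_replicating_ensemble(rules, data, default, n_models):
    """Iteratively reorder the rule set by training error, keeping each
    intermediate ordering as one member of the ensemble."""
    models, current = [], list(rules)
    for _ in range(n_models):
        errs = rule_errors(current, data, default)
        # Stable sort: lower-error rules move to the front.
        current = [r for _, r in sorted(zip(errs, current), key=lambda t: t[0])]
        models.append(list(current))
    return models

def ensemble_predict(models, items, default):
    # Majority vote over the N reordered rule models.
    votes = Counter(predict(m, items, default) for m in models)
    return votes.most_common(1)[0][0]
```

A tiny usage example: with rules `{a} -> pos` and `{b} -> neg` and a training set where `{a, b}` examples are negative, the reordering moves the `{b}` rule ahead of the `{a}` rule, so the ensemble classifies `{a, b}` as negative while still classifying `{a}` alone as positive.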