Associative classification is a promising technique for building accurate classifiers. However, on large or highly correlated datasets, association rule mining may yield huge rule sets. Hence, several pruning techniques have been proposed to select a small subset of high-quality rules. We argue that rule pruning should be kept to a minimum, since the availability of a "rich" rule set may improve the accuracy of the classifier. The L^3 associative classifier is built by means of a lazy pruning technique that discards only those rules that exclusively misclassify training data. Classification of unlabeled data is performed in two steps. A small subset of high-quality rules is considered first. When this set cannot classify the data, a larger rule set is exploited. This second set includes rules usually discarded by previous approaches. To cope with the need to mine large rule sets and use them efficiently for classification, a compact form is proposed that represents a complete rule set in a space-efficient way and without information loss. An extensive experimental evaluation on real and synthetic datasets shows that L^3 improves classification accuracy with respect to previous approaches.
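The two-step classification described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the rule representation, the tie-breaking criterion (confidence, then support), and all names are assumptions made for the example.

```python
# Illustrative sketch of L3-style two-level classification.
# Rule scoring and data structures are assumptions, not the paper's code.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rule:
    antecedent: frozenset   # items that must all appear in the record
    label: str              # predicted class
    confidence: float
    support: float

def matches(rule: Rule, items: set) -> bool:
    """A rule applies when its antecedent is contained in the record."""
    return rule.antecedent <= items

def classify(items: set, level1: list, level2: list, default=None):
    """Try the small set of high-quality rules first (level 1);
    fall back to the larger set of usually-discarded rules (level 2)
    only when no level-1 rule matches the record."""
    for rule_set in (level1, level2):
        applicable = [r for r in rule_set if matches(r, items)]
        if applicable:
            # assumed tie-breaking: highest confidence, then support
            best = max(applicable, key=lambda r: (r.confidence, r.support))
            return best.label
    return default
```

For example, a record matched by a level-1 rule is classified immediately, while a record matched only by a level-2 rule still receives a prediction instead of falling through to a default class.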