Mining association rules between sets of items in large databases
SIGMOD '93 Proceedings of the 1993 ACM SIGMOD international conference on Management of data
An effective hash-based algorithm for mining association rules
SIGMOD '95 Proceedings of the 1995 ACM SIGMOD international conference on Management of data
Data mining using two-dimensional optimized association rules: scheme, algorithms, and visualization
SIGMOD '96 Proceedings of the 1996 ACM SIGMOD international conference on Management of data
Dynamic itemset counting and implication rules for market basket data
SIGMOD '97 Proceedings of the 1997 ACM SIGMOD international conference on Management of data
Beyond market baskets: generalizing association rules to correlations
SIGMOD '97 Proceedings of the 1997 ACM SIGMOD international conference on Management of data
Mining the most interesting rules
KDD '99 Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining
Mining frequent patterns without candidate generation
SIGMOD '00 Proceedings of the 2000 ACM SIGMOD international conference on Management of data
Turbo-charging vertical mining of large databases
SIGMOD '00 Proceedings of the 2000 ACM SIGMOD international conference on Management of data
Generating non-redundant association rules
KDD '00 Proceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining
Efficient search for association rules
KDD '00 Proceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining
Real world performance of association rule algorithms
KDD '01 Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining
Machine Learning
Abstract-Driven Pattern Discovery in Databases
IEEE Transactions on Knowledge and Data Engineering
Rule Induction with CN2: Some Recent Improvements
EWSL '91 Proceedings of the European Working Session on Machine Learning
Mining the Smallest Association Rule Set for Predictions
ICDM '01 Proceedings of the 2001 IEEE International Conference on Data Mining
Fast Algorithms for Mining Association Rules in Large Databases
VLDB '94 Proceedings of the 20th International Conference on Very Large Data Bases
Selecting the right interestingness measure for association patterns
KDD '02 Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining
Constraint-Based Rule Mining in Large, Dense Databases
ICDE '99 Proceedings of the 15th International Conference on Data Engineering
Finding Interesting Associations without Support Pruning
ICDE '00 Proceedings of the 16th International Conference on Data Engineering
Mining Informative Rule Set for Prediction
Journal of Intelligent Information Systems
OPUS: an efficient admissible algorithm for unordered search
Journal of Artificial Intelligence Research
Clustering web images using association rules, interestingness measures, and hypergraph partitions
ICWE '06 Proceedings of the 6th international conference on Web engineering
AIA'06 Proceedings of the 24th IASTED international conference on Artificial intelligence and applications
Mining informative rule set for prediction over a sliding window
ACIIDS'10 Proceedings of the Second international conference on Intelligent information and database systems: Part II
An association rule generation algorithm usually generates too many rules, including many uninteresting ones. Many interestingness criteria have been proposed to prune those uninteresting rules. However, they work in a post-pruning process and hence do not improve rule generation efficiency. In this paper, we discuss properties of the informative rule set and conclude that the informative rule set includes all interesting rules measured by many commonly used interestingness criteria, and that rules excluded from the informative rule set are forwardly prunable, i.e., they can be removed during the rule generation process instead of in post-pruning. Based on these properties, we propose a Direct Interesting rule Generation algorithm, DIG, to directly generate interesting rules defined by any of the 12 interestingness criteria discussed in this paper. We further show experimentally that DIG is faster and uses less memory than Apriori.
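The abstract contrasts post-pruning by an interestingness criterion with pruning during rule generation. As a rough illustration only (this is a brute-force sketch over a hypothetical toy transaction set, not the paper's DIG algorithm), the following applies a confidence threshold while rules are being generated rather than afterwards:

```python
from itertools import combinations

# Hypothetical toy transaction database (illustrative, not from the paper).
transactions = [
    {"milk", "bread", "butter"},
    {"milk", "bread"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"milk", "bread", "butter"},
]

def support(itemset):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def generate_rules(min_support=0.4, min_confidence=0.7):
    """Enumerate frequent itemsets, then emit rules X -> Y whose confidence
    support(X ∪ Y) / support(X) clears the threshold. Applying the criterion
    here, inside generation, stands in for the kind of forward pruning the
    paper advocates over a separate post-pruning pass."""
    items = set().union(*transactions)
    rules = []
    for k in range(2, len(items) + 1):
        for itemset in map(frozenset, combinations(sorted(items), k)):
            if support(itemset) < min_support:
                continue  # infrequent itemsets are skipped before rule expansion
            for r in range(1, k):
                for antecedent in map(frozenset, combinations(sorted(itemset), r)):
                    conf = support(itemset) / support(antecedent)
                    if conf >= min_confidence:
                        rules.append((set(antecedent), set(itemset - antecedent), conf))
    return rules

rules = generate_rules()
```

With this data, a rule such as {milk} -> {bread} survives (confidence 3/4 = 0.75), while {milk, bread} -> {butter} (confidence 2/3) is never emitted at all; a real implementation such as DIG exploits far stronger structural properties of the informative rule set to prune the search space.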