We present a new statistical approach to rule learning that addresses two problems inherent in traditional rule learning: the computational hardness of finding rule sets with low training error, and the need for capacity control to avoid overfitting. The chosen representation attaches weights to rules. Instead of optimizing the error rate directly, we search for rule sets with large margin and low variance; this can be formulated as a convex optimization problem and therefore solved efficiently. Together, the representation and the optimization procedure yield weighted clauses in a CNF-like representation. To avoid overfitting, we propose a model selection strategy based on a novel concentration inequality. Empirical tests show that the system is competitive with existing rule learning algorithms and that its flexible learning bias can be adjusted to improve predictive accuracy considerably.
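The core idea of attaching weights to a fixed pool of rules and optimizing a convex margin/variance trade-off rather than the raw error rate can be sketched as follows. Note that this is an illustrative simplification, not the paper's actual formulation: the ±1 rule encoding, the exact objective (mean margin minus a variance penalty over a norm ball), and the projected-gradient solver are all assumptions made for the example.

```python
import numpy as np

def learn_rule_weights(R, y, lam=1.0, lr=0.1, steps=500):
    """Illustrative sketch: learn weights for a fixed set of rules by
    trading off mean margin against margin variance.

    R : (n, k) matrix of rule outputs in {-1, +1} (rule j's vote on example i)
    y : (n,) labels in {-1, +1}

    Minimizes  -mean(margin) + lam * var(margin)  over ||w||_2 <= 1,
    a convex problem, via projected gradient descent.
    """
    n, k = R.shape
    w = np.zeros(k)
    for _ in range(steps):
        m = y * (R @ w)                          # per-example margins
        yR = y[:, None] * R
        grad_mean = -yR.mean(axis=0)             # gradient of -mean(margin)
        grad_var = 2.0 * ((m - m.mean())[:, None] * yR).mean(axis=0)
        w -= lr * (grad_mean + lam * grad_var)
        norm = np.linalg.norm(w)
        if norm > 1.0:                           # project back onto the unit ball
            w /= norm
    return w
```

A new example is then classified by the sign of the weighted rule vote, `np.sign(r @ w)`; both terms of the objective are convex in `w` (the mean margin is linear, the margin variance is a positive-semidefinite quadratic), which is what makes the computation efficient.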