In this paper, we close the gap between the simple and straightforward implementations of top-down hill-climbing that can be found in the literature, and the rather complex strategies for greedy bottom-up generalization. Our main result is that the simple bottom-up counterpart to the top-down hill-climbing algorithm is unable to learn in domains with dispersed examples. In particular, we show that guided greedy generalization is impossible if the seed example differs in more than one attribute value from its nearest neighbor. We also perform an empirical study of how common this problem is in popular benchmark datasets, and present average-case and worst-case results for the probability of drawing a pathological seed example in binary domains.
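The condition on seed examples can be checked directly. The following sketch (not the paper's implementation; the function names are illustrative) treats examples as binary attribute vectors and flags a seed as pathological when its nearest neighbor lies at Hamming distance greater than one, which is exactly the case in which single-attribute generalization steps have no near neighbor to guide them:

```python
def hamming(a, b):
    """Number of attribute values in which two examples differ."""
    return sum(x != y for x, y in zip(a, b))

def pathological_seeds(examples):
    """Return the examples whose nearest neighbor is at Hamming
    distance > 1, i.e. seeds from which guided greedy bottom-up
    generalization cannot proceed one attribute at a time."""
    bad = []
    for i, seed in enumerate(examples):
        others = [e for j, e in enumerate(examples) if j != i]
        if others and min(hamming(seed, e) for e in others) > 1:
            bad.append(seed)
    return bad

data = [
    (0, 0, 0),
    (0, 0, 1),  # distance 1 from (0, 0, 0): a usable neighbor
    (1, 1, 1),  # nearest neighbor at distance 2: pathological
]
print(pathological_seeds(data))  # [(1, 1, 1)]
```

An empirical study like the one described would run such a check over every example in a benchmark dataset and report the fraction of pathological seeds.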