By automatically reformulating the problem domain, constructive induction ideally overcomes the defects of the initial description. The reformulation presented here uses the Version Space primitives D(E, F), defined for any pair of examples E and F as the set of hypotheses covering E while discriminating against F. From these primitives we derive a polynomial number of M-of-N concepts. Experimentally, many of these concepts turn out to be significant and consistent. A simple learning strategy thus consists of exhaustively exploring these concepts and retaining those of sufficient quality. Tunable complexity is achieved in the MONKEI algorithm by considering a user-supplied number of primitives D(Ei, Fi), where Ei and Fi are stochastically sampled from the training set. MONKEI demonstrates good performance on several benchmark problems and obtains outstanding results on the Predictive Toxicology Evaluation challenge.
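The strategy above can be illustrated with a minimal sketch, assuming a propositional encoding in which each example is a set of boolean attributes (the original work operates in a richer setting). Here `discriminant` plays the role of the primitive D(E, F), an M-of-N concept covers an example if at least M of its N literals hold, and `monkei_sketch` stochastically samples pairs (Ei, Fi) and retains concepts whose training accuracy exceeds a threshold. All function names and the quality measure are illustrative assumptions, not the paper's actual implementation.

```python
import random

def discriminant(E, F):
    """Sketch of the primitive D(E, F): with a boolean-set encoding (an
    assumption here), the attributes true for E and false for F cover E
    while discriminating against F."""
    return frozenset(a for a in E if a not in F)

def m_of_n_covers(literals, m, x):
    """An M-of-N concept covers x if at least m of its n literals hold in x."""
    return sum(1 for a in literals if a in x) >= m

def monkei_sketch(positives, negatives, n_pairs=50, min_accuracy=0.8, seed=0):
    """Stochastic sketch of the learning strategy: sample n_pairs primitives
    D(Ei, Fi), derive the M-of-N concepts they induce, and keep those whose
    precision on the training set reaches min_accuracy (a stand-in for the
    paper's 'sufficient quality' criterion)."""
    rng = random.Random(seed)
    kept = []
    for _ in range(n_pairs):
        E = rng.choice(positives)   # Ei sampled from the positive examples
        F = rng.choice(negatives)   # Fi sampled from the negative examples
        lits = discriminant(E, F)
        if not lits:
            continue
        for m in range(1, len(lits) + 1):
            tp = sum(m_of_n_covers(lits, m, x) for x in positives)
            fp = sum(m_of_n_covers(lits, m, x) for x in negatives)
            covered = tp + fp
            if covered and tp / covered >= min_accuracy:
                kept.append((m, lits))
    return kept
```

Note how the user-supplied `n_pairs` directly tunes the cost: each sampled pair yields at most |D(Ei, Fi)| candidate concepts, so the total work stays polynomial in the sample size rather than in the full hypothesis space.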