Introducing possibilistic logic in ILP for dealing with exceptions
Artificial Intelligence
Learning rules with exceptions may be of interest, especially when the exceptions are unimportant in some sense. Standard Inductive Logic Programming (ILP) algorithms and classical first-order logic are not well suited to handling rules with exceptions: an induced hypothesis accumulates all the exceptions of the rules it contains. Moreover, in multiple-class problems, classifying an example into two different classes (even if one of them is the right one) is incorrect, so a rule with exceptions may prevent an exception-free rule from being useful. This paper proposes a new possibilistic logic framework for weighted ILP. It induces rules that are progressively more accurate, and it manages exceptions by controlling their accumulation. In this setting, we first propose an algorithm for learning rules when the background knowledge and the examples are stratified into layers with different levels of priority or certainty. This allows general but uncertain rules to be induced together with more specific and more certain rules. A second algorithm is presented that does not require an initially weighted database but still learns a default set of rules in the possibilistic setting.
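The core idea of the abstract, letting certainty weights decide conflicts so that exceptions to a general rule are absorbed by a more specific, more certain rule rather than accumulating, can be illustrated with a small sketch. The Python fragment below is a hypothetical illustration, not the paper's algorithm: the names WeightedRule and classify, the attribute-dictionary encoding of examples, and the particular necessity values are all our own assumptions.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class WeightedRule:
    """A rule with a possibilistic weight.

    condition: predicate over an example (here, a dict of attributes).
    necessity: certainty degree in (0, 1]; higher means more certain.
    """
    condition: Callable[[dict], bool]
    label: str
    necessity: float

def classify(example: dict, rules: list[WeightedRule]) -> Optional[str]:
    """Return the label of the most certain rule that fires.

    Conflicts are resolved by the weights: a specific, highly certain
    rule overrides a general, less certain one, so the exceptions of a
    general rule do not accumulate across the whole hypothesis.
    """
    fired = [r for r in rules if r.condition(example)]
    if not fired:
        return None
    return max(fired, key=lambda r: r.necessity).label

# A general but uncertain rule ("birds fly") together with a more
# specific, more certain rule covering its exceptions (penguins).
rules = [
    WeightedRule(lambda e: e.get("bird", False), "flies", 0.7),
    WeightedRule(lambda e: e.get("penguin", False), "does_not_fly", 0.9),
]

print(classify({"bird": True}, rules))                   # -> flies
print(classify({"bird": True, "penguin": True}, rules))  # -> does_not_fly
```

In this toy setting the general rule keeps its penguin exceptions, but because the specific rule carries a higher necessity degree, each example still receives exactly one class, which mirrors the stratified, progressively-more-certain rule sets described above.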