This paper describes the design of the inductive logic programming system Lime. Instead of employing a greedy covering approach to constructing clauses, Lime uses a Bayesian heuristic to evaluate whole logic programs as hypotheses. The notion of a simple clause is introduced: a set of literals that may be viewed as a subpart of a clause, effectively independent of the other subparts in terms of the variables used. Instead of growing a clause one literal at a time, Lime efficiently combines simple clauses to construct a set of gainful candidate clauses. Subsets of these candidate clauses are then evaluated via the Bayesian heuristic to find the final hypothesis. Details of Lime's algorithms and data structures are discussed, and its handling of recursive logic programs is described. Experimental results illustrate how Lime achieves its design goals: better noise handling, learning from a fixed set of examples (including from positive data only), and learning recursive logic programs. Further experiments compare Lime with FOIL and PROGOL in the KRK domain in the presence of noise, and show that Lime's already good noise-handling performance improves further when learning recursive definitions in the presence of noise.
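The key contrast the abstract draws is between greedy covering (grow one clause at a time, remove the examples it covers, repeat) and Lime's approach of scoring entire sets of candidate clauses with a Bayesian heuristic. The sketch below illustrates only that second idea in miniature; the scoring function, the noise parameter, and the complexity penalty are all hypothetical stand-ins and are not Lime's actual heuristic. Each candidate clause is abstracted to the set of example ids it covers, and hypotheses (subsets of clauses) are scored by a noise-tolerant log-likelihood minus a size penalty acting as a log-prior.

```python
from itertools import combinations
import math

def log_likelihood(covered, positives, negatives, noise=0.1):
    # Noise model (illustrative): a covered positive / uncovered negative
    # is "explained" with probability 1 - noise; mismatches get prob. noise.
    ll = 0.0
    for ex in positives:
        ll += math.log(1 - noise if ex in covered else noise)
    for ex in negatives:
        ll += math.log(noise if ex in covered else 1 - noise)
    return ll

def best_hypothesis(clauses, positives, negatives, penalty=1.0, max_size=3):
    """Score every subset of candidate clauses up to max_size and return
    the best one. `clauses` is a list of frozensets of covered example ids.
    Exhaustive search here only; a real system would prune aggressively."""
    best, best_score = None, -math.inf
    for k in range(1, max_size + 1):
        for subset in combinations(range(len(clauses)), k):
            covered = frozenset().union(*(clauses[i] for i in subset))
            score = log_likelihood(covered, positives, negatives) - penalty * k
            if score > best_score:
                best, best_score = subset, score
    return best, best_score

# Toy data: three candidate clauses described by their coverage sets.
clauses = [frozenset({1, 2}), frozenset({3, 4}), frozenset({2, 5})]
positives = {1, 2, 3, 4}
negatives = {5, 6}
hyp, score = best_hypothesis(clauses, positives, negatives)
print(hyp)  # (0, 1): together they cover all positives and no negatives
```

Note how the winning hypothesis is chosen by comparing whole clause sets at once, so a clause that looks weak in isolation can still be selected if it complements another clause, which a greedy covering loop can miss.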