The Bayesian framework for learning from positive, noise-free examples developed by Muggleton [12] is extended to learning functional hypotheses from positive examples whose outputs contain normally distributed noise. The method subsumes a type of distance-based learning as a special case. We also present an effective method of outlier identification, which may significantly improve the predictive accuracy of the final multi-clause hypothesis when it is constructed by a clause-by-clause covering algorithm, as in, e.g., Progol or Aleph. Our method is implemented in Aleph and tested in two experiments: one concerns numeric functions, while the other treats non-numeric discrete data, where the normal distribution is taken as an approximation of the discrete noise distribution.
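The core idea of the noise model above can be illustrated with a minimal sketch, not the paper's actual Aleph implementation: score a candidate hypothesis by the Gaussian log-likelihood of the observed outputs around its predictions, and flag as outliers the examples whose residual exceeds a multiple of the assumed noise standard deviation. The function names, the threshold `k`, and the toy data are illustrative assumptions.

```python
import math

def gaussian_log_likelihood(hypothesis, examples, sigma):
    """Sum of log N(y | hypothesis(x), sigma^2) over (x, y) pairs.

    Higher values mean the hypothesis explains the noisy outputs better.
    """
    total = 0.0
    for x, y in examples:
        r = y - hypothesis(x)  # residual of the observed output
        total += -0.5 * math.log(2 * math.pi * sigma ** 2) - r ** 2 / (2 * sigma ** 2)
    return total

def flag_outliers(hypothesis, examples, sigma, k=3.0):
    """Return examples whose residual exceeds k noise standard deviations.

    Such examples would be withheld from a clause-by-clause covering
    loop so that no clause is fitted to them.
    """
    return [(x, y) for x, y in examples if abs(y - hypothesis(x)) > k * sigma]

# Toy data: target function y = 2x with small Gaussian noise, plus one outlier.
examples = [(1, 2.1), (2, 3.9), (3, 6.05), (4, 12.0)]
double = lambda x: 2 * x

ll = gaussian_log_likelihood(double, examples, sigma=0.1)
outliers = flag_outliers(double, examples, sigma=0.1)  # only (4, 12.0) exceeds 3*sigma
```

Under this sketch, an outlier's squared residual dominates the log-likelihood, so removing flagged examples before fitting each clause is what the covering-based refinement exploits.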