Learning Functions from Imperfect Positive Data

  • Authors: Filip Železný

  • Venue: ILP '01, Proceedings of the 11th International Conference on Inductive Logic Programming
  • Year: 2001

Abstract

The Bayesian framework for learning from positive, noise-free examples derived by Muggleton [12] is extended to learning functional hypotheses from positive examples whose outputs contain normally distributed noise. The method subsumes a type of distance-based learning as a special case. We also present an effective method of outlier identification that can significantly improve the predictive accuracy of the final multi-clause hypothesis when it is constructed by a clause-by-clause covering algorithm, as in, e.g., Progol or Aleph. Our method is implemented in Aleph and tested in two experiments: one concerns numeric functions, while the other treats non-numeric discrete data, where the normal distribution is taken as an approximation of the discrete noise distribution.
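
As a rough illustration of the noise model the abstract describes (an observed output equals a target function's value plus Gaussian noise), the minimal Python sketch below scores a candidate hypothesis by Gaussian log-likelihood and flags high-residual examples as outliers. All of it is an assumption for illustration: the function names, the fixed sigma, and the k-sigma cutoff are not the paper's algorithm, which operates over clausal hypotheses in Aleph rather than plain Python functions.

    import math

    def gaussian_log_likelihood(examples, f, sigma):
        """Log-likelihood of the outputs under the model y = f(x) + N(0, sigma^2).

        `examples` is a list of (input, output) pairs and `f` a candidate
        hypothesis; both are illustrative stand-ins for clausal hypotheses.
        """
        ll = 0.0
        for x, y in examples:
            residual = y - f(x)
            ll += -0.5 * math.log(2 * math.pi * sigma ** 2) \
                  - residual ** 2 / (2 * sigma ** 2)
        return ll

    def flag_outliers(examples, f, sigma, k=3.0):
        """Flag examples whose residual exceeds k standard deviations.

        A crude stand-in for the paper's outlier identification; the
        threshold k is an assumption chosen for illustration.
        """
        return [(x, y) for x, y in examples if abs(y - f(x)) > k * sigma]

    # Score the candidate hypothesis f(x) = 2x against noisy positive data.
    data = [(1, 2.1), (2, 3.9), (3, 6.2), (4, 20.0)]  # last pair is an outlier
    f = lambda x: 2 * x
    print(gaussian_log_likelihood(data, f, sigma=0.2))
    print(flag_outliers(data, f, sigma=0.2))  # -> [(4, 20.0)]

Under such a score, the single outlier at x = 4 drags down the likelihood of an otherwise good hypothesis, which is one way to see why removing outliers before a clause-by-clause covering algorithm commits to a clause can improve the final hypothesis.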