Learning from an approximate theory and noisy examples

  • Authors:
  • Somkiat Tangkitvanich;Masamichi Shimura

  • Affiliations:
  • Department of Computer Science, Tokyo Institute of Technology, Tokyo, Japan;Department of Computer Science, Tokyo Institute of Technology, Tokyo, Japan

  • Venue:
  • AAAI'93: Proceedings of the Eleventh National Conference on Artificial Intelligence
  • Year:
  • 1993

Abstract

This paper presents an approach to a new learning problem: learning from an approximate theory and a set of noisy examples. This problem requires a new learning approach, since it cannot be satisfactorily solved by inductive or analytic learning algorithms, or by their existing combinations. Our approach can be viewed as an extension of the minimum description length (MDL) principle, and is unique in that it is based on encoding the refinement required to transform the given theory into a better theory, rather than on encoding the resultant theory as in traditional MDL. Experimental results show that, under our approach, the theory learned from an approximate theory and a set of noisy examples is more accurate than either the approximate theory itself or a theory learned from the examples alone. This suggests that our approach can combine useful information from both the theory and the training set even though both are only partially correct.
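
The contrast described in the abstract can be sketched in MDL terms. The notation below (T_0 for the given approximate theory, T for a candidate revised theory, E for the noisy examples, and L(.) for code length in bits) is assumed here for illustration and is not taken from the paper itself.

    Traditional MDL:
        T_{\mathrm{MDL}} = \arg\min_{T} \; \bigl[ L(T) + L(E \mid T) \bigr]

    Refinement-based criterion (as described above):
        T^{*} = \arg\min_{T} \; \bigl[ L(T \mid T_0) + L(E \mid T) \bigr]

Under this reading, the first term charges a candidate theory only for how far it departs from the given approximate theory rather than for its full size, while the second term penalizes examples the theory fails to explain; the noisy examples and the approximate theory thus each constrain the other.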