Refutable/Inductive Learning from Neighbor Examples and Its Application to Decision Trees over Patterns

  • Authors:
  • Masako Sato, Yasuhito Mukouchi, Mikiharu Terada

  • Venue:
  • Progress in Discovery Science, Final Report of the Japanese Discovery Science Project
  • Year:
  • 2002

Abstract

The paper develops the theory of refutable/inductive learning as a foundation of discovery science from examples. We consider refutable/inductive language learning from positive examples, some of which may be incorrect. The errors we consider are those described uniformly in terms of a distance over strings. We define the k-neighbor closure of a language L as the collection of strings each of which is at most k distant from some string in L. In the ordinary learning paradigm, a target language is assumed to belong to a hypothesis space, without any guarantee. In this paper, we allow an inference machine to infer a neighbor closure instead of the original language as an admissible approximation. We formalize this kind of learning and give some sufficient conditions on a hypothesis space.

As an application to concrete problems, we deal with languages defined by decision trees over patterns. The problem of learning decision trees over patterns has been studied from the viewpoint of knowledge discovery for genome information processing, in the framework of PAC learning from both positive and negative examples. We investigate their learnability in the limit from neighbor examples, as well as their refutable learnability from complete examples, i.e., from both positive and negative examples. Furthermore, we present some procedures that play an important role in designing efficient learning algorithms for decision trees over regular patterns.
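The k-neighbor closure above is defined relative to an arbitrary distance over strings; as a minimal illustration, the following sketch instantiates that distance with the Levenshtein (edit) distance, which is only one possible choice and is not specified by the paper. For a finite set of strings `L`, membership of a string `w` in the k-neighbor closure then reduces to checking whether `w` is within edit distance `k` of some string in `L`. The function names here are hypothetical, introduced for illustration.

```python
def edit_distance(s: str, t: str) -> int:
    """Levenshtein distance between s and t (one assumed instance
    of the paper's abstract 'distance over strings')."""
    m, n = len(s), len(t)
    # dp[j] holds the distance between s[:i] and t[:j] for the current row i.
    dp = list(range(n + 1))
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(
                dp[j] + 1,                        # delete s[i-1]
                dp[j - 1] + 1,                    # insert t[j-1]
                prev + (s[i - 1] != t[j - 1]),    # substitute (or match)
            )
            prev = cur
    return dp[n]


def in_k_neighbor_closure(w: str, L: set[str], k: int) -> bool:
    """True iff w belongs to the k-neighbor closure of the finite
    language L, i.e. w is at most k distant from some string in L."""
    return any(edit_distance(w, u) <= k for u in L)
```

For infinite languages (such as those defined by decision trees over patterns), this brute-force scan does not apply directly; the point here is only to make the closure definition concrete for a finite sample.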