The More We Learn the Less We Know?: On Inductive Learning from Examples

  • Authors:
  • Piotr Ejdys; Grzegorz Góra

  • Affiliations:
  • Institute of Mathematics, Warsaw University, ul. Banacha 2, 02-097 Warsaw, Poland. pe146810@zodiac2.mimuw.edu.pl, gg153390@zodiac2.mimuw.edu.pl

  • Venue:
  • Fundamenta Informaticae
  • Year:
  • 1999

Abstract

We consider the average error rate of classification as a function of the number of training examples. We investigate the upper and lower bounds of this error for a class of commonly used algorithms based on inductive learning from examples. As a result, we arrive at the astonishing conclusion that, contrary to what one might expect, the error rate of some algorithms does not decrease monotonically with the number of training examples; rather, it initially increases up to a certain point, and only then does it start to decrease. Furthermore, the classification quality of some algorithms is as poor as that of the naive algorithm. We show that for simple monomials, even if we take an exponentially large training data set, the classification quality of some methods will be no better than if we had taken just one or a few training examples.
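The last claim can be made concrete with a hedged toy model (not the paper's exact setting): a rote-learning classifier on a monomial target under the uniform distribution over {0,1}^n. The learner predicts "positive" only on positive points it has already seen, and "negative" otherwise, so its expected error after m i.i.d. examples has a closed form. The model, the parameter choices (n = 40, k = 3, m = 2^20), and the function name below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical illustration: expected error of a rote learner on a
# monomial target over n boolean attributes, uniform distribution.
#
# Target: a conjunction of k literals, so P(positive) = 2**-k.
# Rote learner: predicts "positive" only on exact positive points seen
# in training; otherwise "negative".  Each fixed domain point is missed
# by all m uniform draws with probability (1 - 2**-n)**m, hence
#     err(m) = 2**-k * (1 - 2**-n)**m.

def expected_error(n: int, k: int, m: int) -> float:
    """Expected error of the rote learner after m uniform examples."""
    p_positive = 2.0 ** -k               # mass of the positive region
    p_unseen = (1.0 - 2.0 ** -n) ** m    # a fixed point never drawn
    return p_positive * p_unseen

n, k = 40, 3        # 40 boolean attributes, 3-literal monomial
m = 2 ** 20         # training set exponentially large in n/2
print(f"error with 0 examples:    {expected_error(n, k, 0):.8f}")
print(f"error with 2^20 examples: {expected_error(n, k, m):.8f}")
```

Even with 2^20 examples the error is essentially unchanged from the zero-example baseline, because the 2^(n-k) positive points can only be covered by rote memorization, and a sample of size 2^(n/2) touches a vanishing fraction of a 2^n-point domain.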