Learning a decision boundary from stochastic examples: incremental algorithms with and without queries

  • Authors:
  • Yoshiyuki Kabashima; Shigeru Shinomoto

  • Venue:
  • Neural Computation
  • Year:
  • 1995

Abstract

Even if it is not possible to reproduce a target input-output relation, a learning machine should be able to minimize the probability of making errors. A practical learning algorithm should also be simple enough to go without memorizing example data, if possible. Incremental algorithms such as error backpropagation satisfy this requirement. We propose incremental algorithms that provide fast convergence of the machine parameter θ to its optimal choice θ_o with respect to the number of examples t. We will consider the binary choice model whose target relation has a blurred boundary and the machine whose parameter θ specifies a decision boundary to make the output prediction. The question we wish to address here is how fast θ can approach θ_o, depending upon whether in the learning stage the machine can specify inputs as queries to the target relation, or the inputs are drawn from a certain distribution. If queries are permitted, the machine can achieve the fastest convergence, (θ − θ_o)^2 ∼ O(t^{-1}). If not, O(t^{-1}) convergence is generally not attainable. For learning without queries, we showed in a previous paper that the error-minimum algorithm exhibits a slow convergence, (θ − θ_o)^2 ∼ O(t^{-2/3}). We propose here a practical algorithm that provides a rather fast convergence, O(t^{-4/5}). It is possible to further accelerate the convergence by using more elaborate algorithms. The fastest convergence turned out to be O[(ln t)^2 t^{-1}]. This scaling is considered optimal among possible algorithms, and is not due to the incremental nature of our algorithm.
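
The query-learning rate above can be made concrete with a small simulation. The following is a minimal sketch, not the paper's algorithm: it assumes a one-dimensional blurred boundary P(y = 1 | x) = 1/(1 + exp(−(x − θ_o)/β)), with the names theta_o, beta, and the step constant a chosen purely for illustration. It queries the target at the current estimate and applies a Robbins-Monro stochastic-approximation update; under standard conditions this kind of scheme attains the (θ − θ_o)^2 ∼ O(t^{-1}) rate quoted for learning with queries.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical instance of the binary choice model: the target relation has a
# blurred boundary at theta_o, i.e. P(y = 1 | x) = sigmoid((x - theta_o) / beta).
theta_o = 0.7   # unknown optimal boundary (assumed for this simulation)
beta = 0.2      # blur width of the boundary (assumed)

def query(x):
    """Draw a stochastic label for input x from the blurred target relation."""
    p = 1.0 / (1.0 + np.exp(-(x - theta_o) / beta))
    return rng.random() < p

# Incremental learning WITH queries: always query at the current estimate
# theta_t and nudge it by a Robbins-Monro step a/t.  Since E[y - 1/2] = 0
# exactly at theta = theta_o, the update drifts toward the optimal boundary,
# and the squared error shrinks at the O(1/t) rate quoted in the abstract.
theta = 0.0
a = 1.0  # step-size constant (assumed; tunes the asymptotic variance)
T = 100_000
for t in range(1, T + 1):
    y = query(theta)  # the machine specifies its own input: a query
    theta -= (a / t) * ((1.0 if y else 0.0) - 0.5)

print(f"estimate {theta:.4f}, target {theta_o}, "
      f"squared error {(theta - theta_o) ** 2:.2e}")
```

Querying at the current estimate is what makes the O(t^{-1}) rate reachable in this sketch: every example lands where the boundary is still uncertain. With inputs drawn passively from a fixed distribution, informative examples near the boundary become rare, and the attainable rate degrades, as the abstract states.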