Generalization effects of k-neighbor interpolation training

  • Author: Takeshi Kawabata
  • Affiliation: NTT Basic Research Laboratories, 3-9-11 Midori-cho, Musashino-shi, Tokyo 180, Japan
  • Venue: Neural Computation
  • Year: 1991

Abstract

This paper describes a new training method for continuous-mapping and/or pattern-classification neural networks that performs local sample-density smoothing. A conventional training method uses point-to-point mapping from an input space to an output space. Even though the mapping may be precise at two given training sample points, there is no guarantee of mapping accuracy at points on the line segment connecting them. This paper first develops a theory for formulating line-to-line mapping, called interpolation training, and then extends it to k-nearest-neighbor interpolation. The k-neighbor interpolation training (KNIT) method connects each input training sample to its k nearest neighbors via k line segments. For each training sample, these k input-space segments are mapped onto straight segments in the output space that linearly interpolate between the corresponding training output values. Thus, the web structure formed by connecting input samples is mapped onto the same structure in the output space. By smoothing the input/output function, KNIT reduces the overlearning problem caused by point-to-point training. Simulation results show that KNIT improves vowel recognition on a small speech database.
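
The abstract suggests that KNIT can be read as constraining the learned mapping to be linear along the segments joining each training sample to its k nearest neighbors. The sketch below realizes that reading as interpolated-target data augmentation; it is an assumption-laden illustration, not the paper's exact procedure, and the function name knit_batch, the parameters k and n_interp, and the random sampling of interpolation coefficients are all hypothetical.

```python
import numpy as np

def knit_batch(X, Y, k=3, n_interp=5, seed=0):
    """Augment (X, Y) with KNIT-style interpolated pairs (a sketch, not the
    paper's exact procedure): connect each input sample to its k nearest
    neighbors and pair points on each input-space segment with the linear
    interpolation of the corresponding output values."""
    rng = np.random.default_rng(seed)
    X_aug, Y_aug = [X], [Y]
    for i, x in enumerate(X):
        # Indices of the k nearest neighbors of x (position 0 is x itself,
        # so it is skipped).
        dists = np.linalg.norm(X - x, axis=1)
        neighbors = np.argsort(dists)[1:k + 1]
        for j in neighbors:
            # Points on the input-space segment x -> X[j], paired with the
            # linearly interpolated targets, enforcing line-to-line mapping.
            for t in rng.uniform(0.0, 1.0, size=n_interp):
                X_aug.append(((1 - t) * x + t * X[j])[None, :])
                Y_aug.append(((1 - t) * Y[i] + t * Y[j])[None, :])
    return np.vstack(X_aug), np.vstack(Y_aug)

# Hypothetical usage: train any regression/classification network on the
# augmented pairs so the learned mapping is pulled toward linearity along
# each neighbor segment (hypothetical shapes, loosely echoing the vowel task).
X = np.random.rand(100, 12)                        # 12-dim spectral features
Y = np.eye(5)[np.random.randint(0, 5, size=100)]   # one-hot labels, 5 vowels
X_knit, Y_knit = knit_batch(X, Y, k=3, n_interp=5)
```

Under this reading, KNIT acts like a local, neighbor-graph-restricted form of linear interpolation between training pairs, which smooths the learned function in the regions between samples rather than only at the sample points themselves.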