Regularized neural networks: some convergence rate results

  • Authors:
  • Valentina Corradi; Halbert White

  • Venue:
  • Neural Computation
  • Year:
  • 1995

Abstract

In a recent paper, Poggio and Girosi (1990) proposed a class of neural networks obtained from the theory of regularization. Regularized networks are capable of approximating arbitrarily well any continuous function on a compactum. In this paper we consider in detail the learning problem for the one-dimensional case. We show that in the case of output data observed with noise, regularized networks are capable of learning and approximating (on compacta) elements of certain classes of Sobolev spaces, known as reproducing kernel Hilbert spaces (RKHS), at a nonparametric rate that optimally exploits the smoothness properties of the unknown mapping. In particular, we show that the total squared error, given by the sum of the squared bias and the variance, will approach zero at a rate of n^(-2m/(2m+1)), where m denotes the order of differentiability of the true unknown function. On the other hand, if the unknown mapping is a continuous function but does not belong to an RKHS, then there still exists a unique regularized solution, but this is no longer guaranteed to converge in mean square to a well-defined limit. Further, even if such a solution converges, the total squared error is bounded away from zero for all n sufficiently large.
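
The rate claim can be summarized by the usual bias-variance decomposition of the total squared error; the notation below is illustrative and not taken verbatim from the paper:

\[
\mathrm{TSE}(\hat{f}_n) \;=\; \mathrm{Bias}^2(\hat{f}_n) \;+\; \mathrm{Var}(\hat{f}_n) \;=\; O\!\left(n^{-2m/(2m+1)}\right),
\]

where \(n\) is the sample size and \(m\) is the order of differentiability of the true unknown mapping.

For a concrete picture of the kind of estimator being analyzed, the following is a minimal sketch of a one-dimensional regularization network fit to noisy output data, assuming a Gaussian radial basis function kernel and a fixed regularization parameter. The kernel choice, the smoothing parameter, and all function names are illustrative assumptions, not the specific construction or rate analysis of the paper.

import numpy as np

# Minimal sketch of a 1-D regularization network (kernel ridge form),
# in the spirit of Poggio and Girosi (1990). The Gaussian kernel and the
# regularization parameter below are illustrative assumptions only.

def gaussian_kernel(x, z, width=0.2):
    # Radial basis function G(x - z) used as the network's basis.
    return np.exp(-(x[:, None] - z[None, :]) ** 2 / (2.0 * width ** 2))

def fit_regularized_network(x_train, y_train, lam=1e-3, width=0.2):
    # Solve (G + n * lam * I) c = y for the coefficients at the training centers.
    n = len(x_train)
    G = gaussian_kernel(x_train, x_train, width)
    return np.linalg.solve(G + n * lam * np.eye(n), y_train)

def predict(x_new, x_train, coef, width=0.2):
    return gaussian_kernel(x_new, x_train, width) @ coef

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 200
    x = np.sort(rng.uniform(0.0, 1.0, n))          # inputs on a compactum [0, 1]
    f_true = np.sin(2.0 * np.pi * x)               # smooth target mapping
    y = f_true + 0.1 * rng.standard_normal(n)      # outputs observed with noise

    coef = fit_regularized_network(x, y)
    x_grid = np.linspace(0.0, 1.0, 500)
    f_hat = predict(x_grid, x, coef)

    mse = np.mean((f_hat - np.sin(2.0 * np.pi * x_grid)) ** 2)
    print(f"empirical squared error on the grid: {mse:.5f}")

Rerunning this sketch with larger n and a correspondingly smaller lam illustrates, informally, the decay of the total squared error that the paper quantifies for targets in an RKHS.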