On the geometric convergence of neural approximations

  • Authors: E. Lavretsky
  • Affiliation: Boeing Company, Phantom Works, Huntington Beach, CA
  • Venue: IEEE Transactions on Neural Networks
  • Year: 2002

Abstract

We give upper bounds on the rates of approximation of a set of functions from a real Hilbert space, using convex greedy iterations. The approximation method was originally proposed and analyzed by Jones (1992). Barron (1993) applied the method to the set of functions computable by single-hidden-layer feedforward neural networks. It was shown that such networks achieve an integrated squared error of order O(1/n), where n is the number of iterations or, equivalently, of nodes in the network. Assuming that the functions to be approximated satisfy the so-called δ-angular condition, we show that a rate of approximation of order O(q^n) is achievable, where 0 ⩽ q < 1. Therefore, for the set of functions considered, the reported geometric rate of approximation improves upon the Maurey-Jones-Barron upper bound. In the case of orthonormal convex greedy approximations, the δ-angular condition is shown to be equivalent to geometric decay of the expansion coefficients. In finite dimensions, the δ-angular condition is proven to hold for a wide class of functions.
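
To make the iteration concrete, the sketch below is a minimal convex greedy approximation of the Jones/Barron type in a finite-dimensional Euclidean space, used as a stand-in for the Hilbert-space setting. The dictionary, the target, and the exact step-size rule are illustrative assumptions, not the construction analyzed in the paper; the squared residual of such an iteration is expected to decay at roughly the O(1/n) Maurey-Jones-Barron rate, whereas the geometric O(q^n) rate requires the additional δ-angular condition.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup: a finite dictionary of unit vectors in R^d and a
    # target lying in their convex hull (an average of dictionary elements).
    d, m = 50, 200
    dictionary = rng.standard_normal((m, d))
    dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
    weights = rng.dirichlet(np.ones(m))
    f = weights @ dictionary                     # target in the convex hull

    def convex_greedy(f, dictionary, n_iter=100):
        """Illustrative convex greedy iterations (not the paper's exact scheme).

        At step n the approximation is updated as
            f_n = (1 - a) * f_{n-1} + a * g,
        with the dictionary element g and step size a in [0, 1] chosen to
        minimize the residual norm ||f - f_n||.
        """
        fn = np.zeros_like(f)
        sq_errors = []
        for _ in range(n_iter):
            best_err, best_fn = np.inf, fn
            for g in dictionary:
                # Closed-form minimizer of the quadratic ||f - (1-a) fn - a g||^2
                # over a, clipped to [0, 1] to keep the update convex.
                diff = g - fn
                denom = diff @ diff
                a = np.clip((f - fn) @ diff / denom, 0.0, 1.0) if denom > 0 else 0.0
                cand = (1 - a) * fn + a * g
                err = np.linalg.norm(f - cand)
                if err < best_err:
                    best_err, best_fn = err, cand
            fn = best_fn
            sq_errors.append(best_err ** 2)
        return fn, sq_errors

    _, sq_errors = convex_greedy(f, dictionary)
    # Squared error should shrink roughly like O(1/n); a geometric O(q^n) rate
    # would require extra structure such as the delta-angular condition.
    print(sq_errors[::20])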