A function estimation approach to sequential learning with neural networks

  • Authors:
  • Visakan Kadirkamanathan; Mahesan Niranjan

  • Venue:
  • Neural Computation
  • Year:
  • 1993

Abstract

In this paper, we investigate the problem of optimal sequential learning, viewed as a problem of estimating an underlying function sequentially rather than estimating a set of parameters of the neural network. First, we arrive at a suboptimal solution to the sequential estimate that can be mapped onto a growing gaussian radial basis function (GaRBF) network. This network adds hidden units for each observation. The function space approach, in which the estimates are represented as vectors in a function space, is used in developing a growth criterion to limit the network's growth. A simplification of the criterion leads to two joint criteria, which must be satisfied together: one on the distance of the present pattern from the existing unit centers in the input space, and one on the approximation error of the network for the given observation. This network is similar to the resource-allocating network (RAN) (Platt 1991a), and hence the RAN can be interpreted from a function space approach to sequential learning. Second, we present an enhancement to the RAN. The RAN either allocates a new unit based on the novelty of an observation or adapts the network parameters by the LMS algorithm. The function space interpretation of the RAN lends itself to an enhancement in which the extended Kalman filter (EKF) algorithm is used in place of the LMS algorithm. The performance of the RAN and the enhanced network are compared on the experimental tasks of function approximation and time-series prediction, demonstrating the superior performance of the enhanced network with fewer hidden units. The approach adopted here has led us toward the minimal network required for a sequential learning problem.
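The sequential scheme the abstract describes can be sketched as follows: for each observation, either allocate a new gaussian unit when the pattern is both far from existing centers and poorly predicted (the two joint growth criteria), or otherwise adapt the output weights. The class below is a minimal illustration of that RAN-style loop, not the paper's algorithm; the thresholds (`eps`, `e_min`), the fixed unit width, and the plain LMS weight update (rather than the EKF enhancement) are all simplifying assumptions.

```python
import numpy as np

class SequentialRBF:
    """Minimal RAN-style sequential learner (illustrative sketch).

    Hypothetical parameter values; the paper's exact thresholds,
    width schedule, and EKF update are not reproduced here."""

    def __init__(self, eps=0.5, e_min=0.05, width=1.0, lr=0.05):
        self.eps = eps        # distance threshold for novelty (assumed)
        self.e_min = e_min    # error threshold for novelty (assumed)
        self.width = width    # gaussian width for new units (assumed fixed)
        self.lr = lr          # LMS-style learning rate
        self.centers = []     # hidden-unit centers, grown online
        self.weights = []     # output-layer weights, one per unit

    def _phi(self, x):
        # Gaussian activations of all hidden units for input x.
        return np.array([np.exp(-np.sum((x - c) ** 2) / (2 * self.width ** 2))
                         for c in self.centers])

    def predict(self, x):
        if not self.centers:
            return 0.0
        return float(np.dot(self.weights, self._phi(x)))

    def observe(self, x, y):
        err = y - self.predict(x)
        dist = min((np.linalg.norm(x - c) for c in self.centers),
                   default=np.inf)
        # Joint growth criteria: far from every existing center AND
        # large approximation error -- both must hold to add a unit.
        if dist > self.eps and abs(err) > self.e_min:
            self.centers.append(np.array(x, dtype=float))
            self.weights.append(err)   # new unit absorbs the residual
        elif self.centers:
            # Otherwise adapt existing output weights (LMS step);
            # the paper's enhancement replaces this with an EKF update.
            phi = self._phi(x)
            self.weights = list(np.array(self.weights) + self.lr * err * phi)
```

A single pass over a simple target, e.g. `net.observe(np.array([x]), np.sin(x))` for points on `[0, 2*pi]`, grows only as many units as the distance criterion allows, rather than one per observation as the unrestricted GaRBF network would.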