Original Contribution: Efficient learning procedures for optimal interpolative nets

  • Authors:
  • Sam-Kit Sin; Rui J. P. DeFigueiredo

  • Venue:
  • Neural Networks
  • Year:
  • 1993

Abstract

The Optimal Interpolative (OI) net, derived earlier by one of the authors from a generalized Fock (GF) space formulation, provides a closed-form solution for the synaptic weights of a two-layer feedforward neural network. In this paper, a recursive learning algorithm called RLS-OI is introduced, which extends the original OI solution to a recursive least-squares solution. It is an enhanced version of a learning procedure we derived recently, also based on the OI solution, and adds new features such as a procedure for selecting a good training sequence and a scaling factor in the nonlinear activation function of the hidden neurons. A more general approach is used in its derivation, making it possible to apply the same technique with different forms of activation functions or basis functions. The proposed algorithm selects a small but meaningful (consistent) subset of exemplars, known as prototypes, and recursively updates the synaptic weights of the neural network as each new prototype is found. RLS-OI is a noniterative algorithm and is guaranteed to produce a solution in a number of recursions no larger than the number of exemplars in the training set. It also requires no prespecification of the architecture of the network to be learned; instead, it evolves a net from scratch, only to the extent needed to solve the given problem. The method for picking prototypes and the recursive relations needed to implement the algorithm are described in detail. Simulation results are provided to illustrate our approach.
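
The abstract gives the flow of the algorithm without its recursions, so the following is only a minimal sketch of that flow: exemplars that the current net cannot reproduce within a tolerance are promoted to prototypes (hidden neurons), and the output weights are re-fit as each prototype is added. The names `grow_oi_net`, `scale`, and `tol`, the sigmoid-of-inner-product hidden layer, and the batch least-squares re-fit (where the paper derives exact recursive least-squares updates) are all assumptions for illustration, not the paper's construction.

```python
import numpy as np

def sigmoid(z):
    """Logistic activation; the paper's exact nonlinearity is not
    given in the abstract, so this is a stand-in."""
    return 1.0 / (1.0 + np.exp(-z))

def grow_oi_net(X, Y, scale=1.0, tol=0.1):
    """Grow a two-layer feedforward net in the spirit of RLS-OI.

    X: (n, d) training inputs; Y: (n, m) training targets.
    An exemplar whose prediction error exceeds `tol` is deemed
    inconsistent with the current net and becomes a prototype
    (a new hidden neuron). For clarity, the output weights W are
    re-fit by batch least squares after every exemplar; the paper
    replaces this step with exact recursive updates.
    """
    protos = [X[0]]                      # first exemplar seeds the net
    seen_X, seen_Y = [X[0]], [Y[0]]

    def hidden(x):
        # Hidden-layer response: scaled inner products with the
        # prototypes, passed through the nonlinearity.
        P = np.asarray(protos)           # (k, d) prototype matrix
        return sigmoid(scale * (P @ x))  # (k,) hidden activations

    H = np.array([hidden(xi) for xi in seen_X])
    W, *_ = np.linalg.lstsq(H, np.array(seen_Y), rcond=None)

    for x, y in zip(X[1:], Y[1:]):
        err = y - hidden(x) @ W          # prediction error on new exemplar
        if np.linalg.norm(err) > tol:    # inconsistent -> new prototype
            protos.append(x)
        seen_X.append(x)
        seen_Y.append(y)
        # Re-fit output weights over all exemplars seen so far.
        H = np.array([hidden(xi) for xi in seen_X])
        W, *_ = np.linalg.lstsq(H, np.array(seen_Y), rcond=None)

    return np.asarray(protos), W


# Toy usage (illustrative only): approximate an XOR-like labeling.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
Y = ((X[:, 0] * X[:, 1]) > 0).astype(float).reshape(-1, 1)
protos, W = grow_oi_net(X, Y, scale=3.0, tol=0.4)
print(f"{len(protos)} prototypes selected from {len(X)} exemplars")
```

Note how the sketch reflects two properties claimed in the abstract: at most one prototype is added per exemplar, so the number of growth steps cannot exceed the size of the training set, and the network's architecture is never prespecified but emerges from the data. The `scale` parameter plays the role of the abstract's scaling factor in the hidden-neuron activation.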