A systematic investigation of a neural network for function approximation

  • Authors:
  • Leila Ait Gougam, Mouloud Tribeche, Fawzia Mekideche-Chafa

  • Affiliations:
  • Theoretical Physics Laboratory, Faculty of Sciences-Physics, University of Bab-Ezzouar, USTHB, B.P. 32, El Alia, 16111, Algiers, Algeria (all authors)

  • Venue:
  • Neural Networks
  • Year:
  • 2008

Abstract

A model which takes advantage of wavelet-like functions in the functional form of a neural network is used for function approximation. The expansion relies mainly on the scale parameters, the usual translation parameters being neglected. Two training operations are then investigated: the first consists in optimizing the output synaptic weights, and the second in optimizing the scale parameters hidden inside the elementary tasks. Building upon previously published results, it is found that if (p+1) scale parameters merge during the learning process, derivatives of order p emerge spontaneously in the functional basis. It is also found that, for those tasks which induce such mergings, the function approximation can be improved and the training time reduced by directly implementing the elementary tasks and their derivatives in the functional basis. Attention is also devoted to the role that the transfer functions, the number of iterations, and the number of formal neurons may play during and after the learning process. These results complement previously published findings on this problem.
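To make the two training operations concrete, the following is a minimal sketch, not the authors' code: a one-dimensional network f(x) = Σ_i w_i ψ(a_i x) in which only the output weights w_i and the scale parameters a_i are trained, translations being omitted as in the abstract. The Mexican-hat wavelet, the gradient-descent scheme, the learning rates, and the target function are all illustrative assumptions; the paper does not specify them here.

```python
import numpy as np

def psi(u):
    # Mexican-hat wavelet, a common wavelet-like transfer function (assumed)
    return (1.0 - u**2) * np.exp(-u**2 / 2.0)

def dpsi_du(u):
    # Derivative of psi with respect to its argument
    return (u**3 - 3.0 * u) * np.exp(-u**2 / 2.0)

rng = np.random.default_rng(0)
n_hidden = 8
w = rng.normal(size=n_hidden)            # output synaptic weights
a = rng.uniform(0.5, 2.0, size=n_hidden) # scale parameters (no translations)

x = np.linspace(-3.0, 3.0, 200)
target = np.sin(2.0 * x) * np.exp(-x**2 / 4.0)  # assumed target function

lr_w, lr_a = 1e-2, 1e-3
for step in range(5000):
    U = np.outer(a, x)        # (n_hidden, n_points): each row is a_i * x
    Phi = psi(U)              # hidden activations
    y = w @ Phi               # network output
    err = y - target          # pointwise error

    # Mean-squared-error gradients with respect to w and a
    grad_w = Phi @ err / x.size
    grad_a = (w[:, None] * dpsi_du(U) * x[None, :]) @ err / x.size

    w -= lr_w * grad_w        # first training operation: output weights
    a -= lr_a * grad_a        # second training operation: scale parameters

print("final MSE:", np.mean(err**2))
```

Under this reading, the merging phenomenon described above would correspond to two or more entries of `a` converging to the same value during training, at which point the (p+1) nearly identical basis functions span, up to reparametrization, the wavelet and its first p derivatives.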