A High Parallel Procedure to Initialize the Output Weights of a Radial Basis Function or BP Neural Network

  • Authors:
  • Rossella Cancelliere


  • Venue:
  • PARA '00 Proceedings of the 5th International Workshop on Applied Parallel Computing, New Paradigms for HPC in Industry and Academia
  • Year:
  • 2000


Abstract

A neural network can be trained using many different procedures; each seeks the weights that minimize the discrepancies between the targets and the actual outputs of the network. The optimal weights can be found either directly or through iterative techniques; in both cases it is sometimes necessary (or simply useful) to evaluate the pseudo-inverse of the matrix of projections of the input examples into the function space created by the network. The operations this requires can, however, become difficult (and sometimes impossible) when this matrix is very large, so we present a way to subdivide it and achieve our aim with a highly parallel algorithm.
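The direct approach mentioned in the abstract can be sketched as follows. This is a minimal NumPy illustration of computing RBF output weights via the pseudo-inverse of the projection matrix, not the paper's parallel subdivision algorithm; the network shape, the Gaussian basis function, and all names (`X`, `T`, `centers`, `width`) are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's algorithm):
# project inputs into the RBF function space, then obtain the output
# weights directly as the least-squares solution via the pseudo-inverse.

def rbf_projections(X, centers, width):
    """H[i, j] = exp(-||x_i - c_j||^2 / (2 * width^2)), a Gaussian basis."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                 # input examples
T = np.sin(X.sum(axis=1, keepdims=True))      # targets
centers = X[rng.choice(200, size=20, replace=False)]  # hidden-unit centers

H = rbf_projections(X, centers, width=1.0)    # projection matrix (200 x 20)
W = np.linalg.pinv(H) @ T                     # output weights minimizing ||H W - T||
residual = np.linalg.norm(H @ W - T)
```

When `H` grows too large for a single pseudo-inverse computation, as the abstract notes, the matrix must be subdivided; the paper's contribution is a highly parallel procedure for doing exactly that.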