Heuristics for the selection of weights in sequential feed-forward neural networks: An experimental study

  • Authors:
  • Enrique Romero; René Alquézar

  • Affiliations:
  • Departament de Llenguatges i Sistemes Informàtics, Universitat Politècnica de Catalunya, Barcelona, Spain; Departament de Llenguatges i Sistemes Informàtics, Universitat Politècnica de Catalunya, Barcelona, Spain

  • Venue:
  • Neurocomputing
  • Year:
  • 2007

Abstract

The selection of weights of the new hidden units for sequential feed-forward neural networks (FNNs) usually involves a non-linear optimization problem that cannot be solved analytically in the general case, so a suboptimal solution is sought heuristically. Most models found in the literature choose the weights in the first layer that correspond to each hidden unit so that its associated output vector matches the previous residue as well as possible. The weights in the second layer may or may not be optimized (in a least-squares sense). Several exceptions to the idea of matching the residue perform an (implicit or explicit) orthogonalization of the output vectors of the hidden units; in this case, the weights in the second layer are always optimized. An experimental study of the aforementioned approaches to select the weights for sequential FNNs is presented. Our results indicate that the orthogonalization of the output vectors of the hidden units outperforms the strategy of matching the residue, both for approximation and generalization purposes.
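As a rough illustration of the residue-matching strategy the abstract describes, the following sketch grows a one-hidden-layer tanh network unit by unit: each new unit's input weights are picked so its output vector best matches the current residue (here by a crude random search, standing in for whatever non-linear heuristic a given model uses), and the second-layer weights are then optimized in the least-squares sense over all units accepted so far. The toy regression problem, the tanh transfer function, and the random-search candidate generator are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem (illustrative only).
X = np.linspace(-1, 1, 200).reshape(-1, 1)
y = np.sin(3 * X[:, 0]) + 0.1 * rng.standard_normal(200)

def candidate_output(X, w, b):
    """Output vector of a single tanh hidden unit over the whole sample."""
    return np.tanh(X @ w + b)

H = []                      # output vectors of the accepted hidden units
residue = y.copy()
for unit in range(10):
    # Heuristic first-layer weight selection: draw random candidates and
    # keep the one whose (normalized) output vector best matches the
    # current residue.
    best, best_score = None, -np.inf
    for _ in range(200):
        w, b = rng.standard_normal(1), rng.standard_normal()
        h = candidate_output(X, w, b)
        score = abs(h @ residue) / (np.linalg.norm(h) + 1e-12)
        if score > best_score:
            best, best_score = h, score
    H.append(best)
    # Second-layer weights optimized in the least-squares sense over
    # *all* hidden units added so far, then the residue is updated.
    Hm = np.column_stack(H)
    beta, *_ = np.linalg.lstsq(Hm, y, rcond=None)
    residue = y - Hm @ beta

print("final RMSE:", np.sqrt(np.mean(residue ** 2)))
```

The orthogonalization alternative discussed in the abstract would instead orthogonalize each accepted output vector against the previous ones (Gram-Schmidt style) before computing its second-layer weight, which makes the least-squares update of each new unit independent of the earlier ones.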