Topology and geometry of single hidden layer network, least squares weight solutions

  • Authors:
  • Frans M. Coetzee; Virginia L. Stonick


  • Venue:
  • Neural Computation
  • Year:
  • 1995

Abstract

In this paper the topological and geometric properties of the weight solutions for multilayer perceptron (MLP) networks under the MSE error criterion are characterized. The characterization is obtained by analyzing a homotopy from linear to nonlinear networks in which the hidden node function is slowly transformed from a linear to the final sigmoidal nonlinearity. Two different geometric perspectives for this optimization process are developed. The generic topology of the nonlinear MLP weight solutions is described and related to the geometric interpretations, error surfaces, and homotopy paths, both analytically and using carefully constructed examples. These results illustrate that although the natural homotopy provides a practically valuable heuristic for training, it suffers from a number of theoretical and practical difficulties. The linear system is a bifurcation point of the homotopy equations, and solution paths are therefore generically discontinuous. Bifurcations and infinite solutions further occur for data sets that are not of measure zero. These results weaken the guarantees on global convergence and exhaustive behavior normally associated with homotopy methods. However, the analyses presented provide a clear understanding of the relationship between linear and nonlinear perceptron networks, and thus a firm foundation for development of more powerful training methods. The geometric perspectives and generic topological results describing the nature of the solutions are further generally applicable to network analysis and algorithm evaluation.
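The natural homotopy described above can be illustrated with a minimal sketch. Here the hidden activation is deformed as φ(a, t) = (1 − t)·a + t·tanh(a), so the network is purely linear at t = 0 and fully sigmoidal at t = 1; the weight solution is tracked across a finite schedule of t values by warm-started gradient descent on the MSE. All names and hyperparameters (`train_homotopy`, the step counts, the learning rate) are illustrative assumptions, not the authors' procedure, and this simple continuation ignores the bifurcation and discontinuity issues the paper analyzes:

```python
import numpy as np

def phi(a, t):
    """Homotopy activation: linear at t=0, tanh at t=1 (illustrative choice)."""
    return (1.0 - t) * a + t * np.tanh(a)

def dphi(a, t):
    """Derivative of phi with respect to the pre-activation a."""
    return (1.0 - t) + t * (1.0 - np.tanh(a) ** 2)

def train_homotopy(X, y, n_hidden=3, steps=20, inner_iters=200, lr=0.01, seed=0):
    """Track the MSE weight solution as t sweeps 0 -> 1, warm-starting each step."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))  # input->hidden weights
    w2 = rng.normal(scale=0.5, size=n_hidden)                # hidden->output weights
    for t in np.linspace(0.0, 1.0, steps):
        for _ in range(inner_iters):
            A = X @ W1                       # hidden pre-activations
            H = phi(A, t)                    # hidden outputs under current homotopy
            r = H @ w2 - y                   # residual of the MSE fit
            g2 = H.T @ r / len(y)            # gradient wrt output weights
            gH = np.outer(r, w2) * dphi(A, t)
            g1 = X.T @ gH / len(y)           # gradient wrt input weights
            w2 -= lr * g2
            W1 -= lr * g1
    return W1, w2

# Fit a small 1-hidden-layer network to a sigmoidal target.
X = np.column_stack([np.linspace(-2, 2, 50), np.ones(50)])  # input plus bias column
y = np.tanh(1.5 * X[:, 0])
W1, w2 = train_homotopy(X, y)
mse = np.mean((phi(X @ W1, 1.0) @ w2 - y) ** 2)
```

Because each t-step is warm-started from the previous solution, the sketch behaves like a crude path-following scheme; as the abstract notes, such paths are generically discontinuous at the linear bifurcation point, so this heuristic can jump between solution branches rather than follow a single continuous path.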