Training of sparsely connected MLPs

  • Authors:
  • Markus Thom; Roland Schweiger; Günther Palm

  • Affiliations:
  • Department Environment Perception, Daimler AG, Ulm, Germany; Department Environment Perception, Daimler AG, Ulm, Germany; Institute of Neural Information Processing, University of Ulm, Germany

  • Venue:
  • DAGM'11: Proceedings of the 33rd International Conference on Pattern Recognition
  • Year:
  • 2011


Abstract

Sparsely connected Multi-Layer Perceptrons (MLPs) differ from conventional MLPs in that only a small fraction of the entries in their weight matrices are nonzero. Using sparse matrix-vector multiplication algorithms therefore reduces the computational complexity of classification. Training of sparsely connected MLPs proceeds in two consecutive stages. In the first stage, initial values for the network's parameters are obtained as the solution to an unsupervised matrix factorization problem that minimizes the reconstruction error. In the second stage, a modified version of the supervised backpropagation algorithm optimizes the MLP's parameters with respect to the classification error. Experiments on the MNIST database of handwritten digits show that the proposed approach matches the classification performance of a densely connected MLP while speeding up classification by a factor of seven.
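
The following is a minimal sketch, not the authors' implementation, of the point the abstract makes about classification cost: when most weight entries are zero, the forward pass reduces to cheap sparse matrix-vector products. The layer sizes, sparsity level, activation function, and random weights below are illustrative assumptions only; the paper's weights would come from the two-stage training procedure described above.

```python
# Sketch: forward pass of a sparsely connected MLP using SciPy sparse matrices.
# All sizes and the 10% density are assumptions chosen for illustration.
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)

def random_sparse_weights(n_out, n_in, density=0.1):
    """Weight matrix in CSR format with only a small fraction of nonzero entries."""
    return sp.random(n_out, n_in, density=density, format="csr",
                     random_state=rng, data_rvs=rng.standard_normal)

# Illustrative architecture: 784 inputs (e.g. flattened MNIST pixels),
# one hidden layer of 300 units, 10 output classes.
W1 = random_sparse_weights(300, 784)   # ~90% of the entries are zero
W2 = random_sparse_weights(10, 300)
b1 = np.zeros(300)
b2 = np.zeros(10)

def classify(x):
    """Each layer is a sparse matrix-vector product plus bias and nonlinearity."""
    h = np.tanh(W1 @ x + b1)           # hidden activations
    scores = W2 @ h + b2               # class scores
    return int(np.argmax(scores))

x = rng.standard_normal(784)           # stand-in for one input image
print(classify(x))
```

Because the weight matrices are stored in a compressed sparse format, the cost of each layer scales with the number of nonzero weights rather than with the full matrix size, which is the source of the classification speed-up reported in the abstract.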