Sparse Coding Neural Gas: Learning of overcomplete data representations

  • Authors:
  • Kai Labusch, Erhardt Barth, Thomas Martinetz

  • Affiliation (all authors):
  • Institute for Neuro- and Bioinformatics, University of Lübeck, Ratzeburger Allee 160, 23538 Lübeck, Germany

  • Venue:
  • Neurocomputing
  • Year:
  • 2009


Abstract

We consider the problem of learning an unknown (overcomplete) basis from data generated from unknown sparse linear combinations of its elements. Introducing the Sparse Coding Neural Gas algorithm, we show how to combine the original Neural Gas algorithm with Oja's rule in order to learn a simple sparse code that represents each training sample by only one scaled basis vector. We generalize this algorithm by using Orthogonal Matching Pursuit in order to learn a sparse code where each training sample is represented by a linear combination of up to k basis elements. We evaluate the influence of additive noise and of the coherence of the original basis on the performance with respect to the reconstruction of the original basis, and compare the new method to other state-of-the-art methods. For this analysis, we use artificial data where the original basis is known. Furthermore, we employ our method to learn an overcomplete representation for natural images and obtain an appealing set of basis functions that resemble the receptive fields of neurons in the primary visual cortex. An important result is that the algorithm converges even with a high degree of overcompleteness. A reference implementation of the methods is provided.
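The abstract's two building blocks can be sketched in a few lines of NumPy: a 1-sparse learner in which dictionary atoms compete via a Neural Gas rank-based neighborhood and are updated with Oja's rule, plus a plain Orthogonal Matching Pursuit coder for the k-sparse generalization. This is a minimal illustrative reading of the abstract, not the authors' reference implementation; all parameter names, decay schedules, and the exact form of the ranked Oja update are our assumptions.

```python
import numpy as np

def scng_1sparse(X, n_atoms, n_iter=5000, alpha0=0.1, lambda0=None, seed=0):
    """Sketch of a 1-sparse Sparse Coding Neural Gas learner.

    Each sample is approximated by a single scaled dictionary atom.
    Atoms compete through a Neural Gas ranking and are updated with
    Oja's rule. Schedules and constants here are illustrative guesses.
    """
    rng = np.random.default_rng(seed)
    dim, n_samples = X.shape
    if lambda0 is None:
        lambda0 = n_atoms / 2.0
    # random unit-norm initial dictionary (atoms are columns)
    C = rng.standard_normal((dim, n_atoms))
    C /= np.linalg.norm(C, axis=0)
    for t in range(n_iter):
        frac = t / n_iter
        alpha = alpha0 * 0.01 ** frac                  # decaying learning rate
        lam = lambda0 * (0.01 / lambda0) ** frac       # shrinking neighborhood
        x = X[:, rng.integers(n_samples)]
        y = C.T @ x                                    # response of every atom
        ranks = np.argsort(np.argsort(-np.abs(y)))     # 0 = best-matching atom
        h = np.exp(-ranks / lam)                       # Neural Gas factors
        # Oja's rule, softened by the ranking: dc = alpha*h*y*(x - y*c)
        C += alpha * h * y * (x[:, None] - C * y)
        C /= np.linalg.norm(C, axis=0)                 # keep atoms unit norm
    return C

def omp(C, x, k):
    """Orthogonal Matching Pursuit: greedy k-sparse coding of x over C.

    Returns a dense coefficient vector with at most k non-zero entries
    (unoptimized textbook version, for illustration only).
    """
    residual = x.copy()
    support = []
    coef = np.zeros(C.shape[1])
    for _ in range(k):
        j = int(np.argmax(np.abs(C.T @ residual)))     # most correlated atom
        if j not in support:
            support.append(j)
        # least-squares fit on the current support, then update the residual
        sol, *_ = np.linalg.lstsq(C[:, support], x, rcond=None)
        coef[:] = 0.0
        coef[support] = sol
        residual = x - C[:, support] @ sol
    return coef
```

A typical experiment in the spirit of the paper draws a random basis, generates 1-sparse (or k-sparse) samples from it, and checks how well the learned atoms match the generating ones.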