Binary sparse coding

  • Authors:
  • Marc Henniges, Gervasio Puertas, Jörg Bornschein, Julian Eggert, Jörg Lücke

  • Affiliations:
  • FIAS, Goethe-Universität Frankfurt am Main, Germany (Henniges, Puertas, Bornschein, Lücke); Honda Research Institute Europe, Offenbach am Main, Germany (Eggert)

  • Venue:
  • LVA/ICA'10 Proceedings of the 9th international conference on Latent variable analysis and signal separation
  • Year:
  • 2010


Abstract

We study a sparse coding learning algorithm that allows for simultaneous learning of the data sparseness and the basis functions. The algorithm is derived from a generative model with binary latent variables instead of the continuous-valued latents used in classical sparse coding. We apply a novel approach to maximum likelihood parameter estimation that allows for efficient estimation of all model parameters. The approach is a new form of variational EM that uses truncated sums instead of factored approximations to the intractable posterior distributions. In contrast to almost all previous versions of sparse coding, the resulting learning algorithm allows for an estimation of the optimal degree of sparseness along with an estimation of the optimal basis functions. We can thus monitor the time course of the data sparseness during the learning of basis functions. In numerical experiments on artificial data we show that the algorithm reliably extracts the true underlying basis functions along with the noise level and data sparseness. In applications to natural images we obtain Gabor-like basis functions along with a sparseness estimate. If large numbers of latent variables are used, the obtained basis functions take on properties of simple-cell receptive fields that classical sparse coding or ICA approaches do not reproduce.
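The generative model with binary latents and the truncated-sum idea from the abstract can be sketched roughly as follows. This is an illustrative NumPy sketch, not the authors' implementation: the Bernoulli prior and Gaussian noise model are the standard choices for this model family, but the dimensions, the truncation bound `gamma`, and all names are assumptions of this sketch, and the paper's candidate-preselection step for the truncated state space is omitted.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# Illustrative model sizes (not from the paper).
D, H = 25, 10        # observed and latent dimensions
pi = 0.2             # Bernoulli sparseness parameter: p(s_h = 1) = pi
sigma = 0.1          # std of the Gaussian observation noise

# Generative model: y = W s + noise, with a binary latent vector s.
W = rng.normal(size=(D, H))                    # basis functions (columns of W)
s_true = (rng.random(H) < pi).astype(float)    # binary latents, s_h in {0, 1}
y = W @ s_true + sigma * rng.normal(size=D)    # one observed data point

def truncated_states(H, gamma):
    """All binary vectors with at most gamma active units.

    Replaces the full 2**H latent state space by a small subset, in the
    spirit of the truncated sums mentioned in the abstract."""
    states = []
    for k in range(gamma + 1):
        for idx in combinations(range(H), k):
            s = np.zeros(H)
            s[list(idx)] = 1.0
            states.append(s)
    return np.array(states)

def truncated_posterior(y, W, pi, sigma, gamma=3):
    """Exact posterior p(s | y), renormalized over the truncated state set."""
    S = truncated_states(W.shape[1], gamma)
    k = S.sum(axis=1)                                  # active units per state
    log_prior = k * np.log(pi) + (W.shape[1] - k) * np.log(1 - pi)
    resid = y[None, :] - S @ W.T                       # (n_states, D)
    log_lik = -0.5 * (resid ** 2).sum(axis=1) / sigma ** 2
    log_p = log_prior + log_lik
    p = np.exp(log_p - log_p.max())                    # stable normalization
    return p / p.sum(), S

p, S = truncated_posterior(y, W, pi, sigma)
s_mean = p @ S   # posterior mean of s, usable in an EM M-step for W, pi, sigma
```

Because the posterior is only summed over states with at most `gamma` active units, the E-step cost grows with the number of such states rather than with 2^H, which is what makes joint estimation of the sparseness, noise level, and basis functions tractable.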