Dictionary learning with large step gradient descent for sparse representations

  • Authors:
  • Boris Mailhé; Mark D. Plumbley

  • Affiliations:
  • School of Electronic Engineering and Computer Science, Centre for Digital Music, Queen Mary University of London, London, United Kingdom (both authors)

  • Venue:
  • LVA/ICA'12: Proceedings of the 10th International Conference on Latent Variable Analysis and Signal Separation
  • Year:
  • 2012

Abstract

This work presents a new algorithm for dictionary learning. Existing algorithms such as MOD and K-SVD often fail to find the best dictionary because they get trapped in a local minimum. Olshausen and Field's Sparsenet algorithm relies on a fixed-step projected gradient descent; with the right step size, it can avoid local minima and converge to the global minimum. The problem then becomes finding that step size. In this work we derive the expression for the optimal step of the gradient descent, but the step we actually use is twice as large as that optimal step. That large step allows the descent to bypass local minima and yields significantly better results than existing algorithms. The algorithms are compared on synthetic data; our method outperforms existing algorithms both in approximation quality and in perfect recovery rate when an oracle support for the sparse representation is provided.
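As a rough sketch of the idea described above, the following numpy code performs one such dictionary update: a gradient step on the data-fidelity term using twice the exact line-search step, followed by a projection of the atoms back onto the unit sphere. The function name, the closed-form step alpha* = ||G||_F^2 / ||G A||_F^2 (the minimiser of ||X - (D + alpha G) A||_F^2 over alpha), and the unit-norm projection are assumptions inferred from the abstract and from Sparsenet-style learning, not the paper's exact formulation.

    import numpy as np

    def large_step_dictionary_update(X, D, A):
        """Hypothetical sketch: one large-step projected gradient update of D
        for the objective 0.5 * ||X - D A||_F^2 with fixed coefficients A."""
        R = X - D @ A                 # residual of the current approximation
        G = R @ A.T                   # negative gradient with respect to D
        denom = np.linalg.norm(G @ A) ** 2
        if denom == 0.0:              # zero gradient: nothing to update
            return D
        alpha_opt = np.linalg.norm(G) ** 2 / denom  # exact line-search step
        D = D + 2.0 * alpha_opt * G   # large step: twice the optimal step
        # Projection: renormalise each atom (column of D) to unit norm
        return D / np.linalg.norm(D, axis=0, keepdims=True)

In a full dictionary learning loop, an update like this would alternate with a sparse coding step that recomputes the coefficients A for the current dictionary, repeating until convergence.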