Dictionary learning for sparse approximations with the majorization method

  • Authors:
  • Mehrdad Yaghoobi, Thomas Blumensath, Mike E. Davies

  • Affiliations:
  • Institute for Digital Communication and the Joint Research Institute for Signal and Image Processing, Edinburgh University, Edinburgh, U.K. (all authors)

  • Venue:
  • IEEE Transactions on Signal Processing
  • Year:
  • 2009


Abstract

In order to find sparse approximations of signals, an appropriate generative model for the signal class has to be known. If the model is unknown, it can be adapted using a set of training samples. This paper presents a novel method for dictionary learning and extends the learning problem by introducing different constraints on the dictionary. The convergence of the proposed method to a fixed point is guaranteed, unless the accumulation points form a continuum. This holds for different sparsity measures. The majorization method is an optimization technique that replaces the original objective function with a surrogate function that is updated at each optimization step. This method has been used successfully in sparse approximation and statistical estimation [e.g., expectation-maximization (EM)] problems. This paper shows that the majorization method can also be applied to the dictionary learning problem. The proposed method is evaluated against other methods on both synthetic and real data, and different constraints on the dictionary are compared. Simulations show the advantages of the proposed method over other currently available dictionary learning methods, not only in terms of average performance but also in terms of computation time.
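
To make the majorization idea concrete, the sketch below alternates two majorization-minimization loops for a standard dictionary learning objective, 0.5‖Y − DX‖²_F + λ‖X‖₁, with dictionary columns constrained to at most unit norm. This is a minimal illustration of the general technique the abstract describes, not the authors' exact algorithm; the function name `mm_dictionary_learning`, the specific constraint choice, and all parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(Z, t):
    """Elementwise soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def mm_dictionary_learning(Y, n_atoms, lam=0.1, n_outer=50, n_inner=10, seed=0):
    """Alternating majorization-minimization (illustrative sketch) for
        min_{D, X}  0.5 * ||Y - D X||_F^2 + lam * ||X||_1
    subject to dictionary columns of at most unit norm.  Each subproblem
    is approximately solved by iterating a quadratic surrogate whose
    curvature dominates the relevant Lipschitz constant."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((Y.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0, keepdims=True)   # start with unit-norm atoms
    X = np.zeros((n_atoms, Y.shape[1]))

    for _ in range(n_outer):
        # Sparse coding: the surrogate curvature c_x must exceed ||D||_2^2,
        # which yields a Landweber step followed by soft-thresholding.
        c_x = np.linalg.norm(D, 2) ** 2 + 1e-8
        for _ in range(n_inner):
            X = soft_threshold(X + D.T @ (Y - D @ X) / c_x, lam / c_x)

        # Dictionary update: the surrogate curvature c_d must exceed ||X||_2^2;
        # a gradient step is followed by projection onto the constraint set.
        c_d = np.linalg.norm(X, 2) ** 2 + 1e-8
        for _ in range(n_inner):
            D = D + (Y - D @ X) @ X.T / c_d
            D /= np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1.0)

    return D, X
```

For instance, `mm_dictionary_learning(Y, n_atoms=2 * Y.shape[0])` would learn a twice-overcomplete dictionary. Because each surrogate minimization only needs to decrease the objective, the inner loops can be kept short, which is one reason majorization-based updates tend to be computationally cheap per iteration.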