Contrastive divergence in Gaussian diffusions

  • Authors:
  • Javier R. Movellan

  • Affiliations:
  • Institute for Neural Computation, University of California San Diego, La Jolla, CA 92093-0515, U.S.A. movellan@mplab.ucsd.edu

  • Venue:
  • Neural Computation
  • Year:
  • 2008

Abstract

This letter presents an analysis of the contrastive divergence (CD) learning algorithm when applied to continuous-time linear stochastic neural networks. For this case, powerful techniques exist that allow a detailed analysis of the behavior of CD. The analysis shows that CD converges to maximum likelihood solutions only when the network structure is such that it can match the first moments of the desired distribution. Otherwise, CD can converge to solutions arbitrarily different from the maximum likelihood solutions, or it can even diverge. This result suggests the need to improve our theoretical understanding of the conditions under which CD is expected to be well behaved and the conditions under which it may fail. In addition, the results point to practical ideas on how to improve the performance of CD.
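
For readers unfamiliar with the algorithm, the following is a minimal sketch, not taken from the letter: a CD-1 update for the simplest Gaussian diffusion, dx = -lam * x dt + sqrt(2) dW, whose stationary density is proportional to exp(-lam * x^2 / 2). The model, minibatch size, Euler step, and learning rate are all illustrative assumptions.

    import numpy as np

    # Illustrative CD-1 sketch (not the paper's construction): fit the
    # precision `lam` of the diffusion dx = -lam * x dt + sqrt(2) dW, whose
    # stationary density is prop. to exp(-lam * x^2 / 2); E(x) = lam * x^2 / 2.
    rng = np.random.default_rng(0)
    data = rng.normal(0.0, 2.0, size=10_000)  # target: zero mean, variance 4

    lam, dt, eta = 1.0, 0.05, 0.02  # initial precision, Euler step, learning rate
    for _ in range(5_000):
        x0 = rng.choice(data, size=256)  # CD starts the sampling chains at the data
        # one Euler-Maruyama step of the diffusion gives the "reconstruction"
        x1 = x0 - lam * x0 * dt + np.sqrt(2.0 * dt) * rng.standard_normal(256)
        # CD-1 update: eta * (<dE/dlam>_recon - <dE/dlam>_data), dE/dlam = x^2 / 2
        lam += eta * 0.5 * (np.mean(x1**2) - np.mean(x0**2))

    print(f"CD precision: {lam:.3f}  ML precision: {1.0 / data.var():.3f}")

Because this fully visible model can match the first moments of the data, the CD fixed point lands close to the maximum likelihood precision 1 / Var(x), which illustrates the benign regime the abstract describes; the letter's negative results concern network structures that cannot match those moments.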