I present a general theory of mean-field approximation based on information geometry, applicable not only to Boltzmann machines but also to wider classes of statistical models. Using a perturbation expansion of the Kullback-Leibler divergence (the Plefka expansion in statistical physics), I derive a formulation of mean-field approximation of general orders. It naturally includes the "naive" mean-field approximation and is consistent with the Thouless-Anderson-Palmer (TAP) approach and the linear response theorem in statistical physics.
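As a concrete illustration (not taken from the paper), the following is a minimal sketch of the "naive" mean-field approximation for a Boltzmann machine with ±1 units: the magnetizations are found by iterating the fixed-point equations m_i = tanh(θ_i + Σ_j W_ij m_j). The coupling matrix `W` and biases `theta` below are hypothetical example values chosen only to demonstrate convergence.

```python
import numpy as np

def naive_mean_field(W, theta, n_iter=200, tol=1e-8):
    """Naive mean-field approximation for a Boltzmann machine with
    +/-1 units: iterate m_i = tanh(theta_i + sum_j W_ij m_j) to a
    fixed point. (The TAP approach adds an Onsager reaction term
    to the argument of tanh; it is omitted in this naive sketch.)"""
    m = np.zeros_like(theta)          # start from zero magnetizations
    for _ in range(n_iter):
        m_new = np.tanh(theta + W @ m)
        if np.max(np.abs(m_new - m)) < tol:
            return m_new              # converged
        m = m_new
    return m

# Hypothetical example: 3 units, weak symmetric couplings, zero diagonal
rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(3, 3))
W = (W + W.T) / 2                    # couplings must be symmetric
np.fill_diagonal(W, 0.0)             # no self-coupling
theta = np.array([0.5, -0.2, 0.1])
m = naive_mean_field(W, theta)
```

With weak couplings as here, the iteration is a contraction and converges quickly; `m` then satisfies the self-consistency equations to numerical tolerance.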