Boltzmann machines can be regarded as Markov random fields; in the binary case they are equivalent to the Ising spin model of statistical mechanics. Exact learning in Boltzmann machines is NP-hard, so in practice we must use approximate methods to construct tractable learning algorithms. In this letter, we propose new, practical learning algorithms for Boltzmann machines based on the belief propagation algorithm and the linear response approximation, which are often referred to as advanced mean-field methods. Finally, we demonstrate the validity of our algorithms through numerical experiments.
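To illustrate the flavor of such advanced mean-field methods, the following is a minimal sketch (not the letter's actual algorithm, which uses belief propagation) of the simpler *naive* mean-field equations combined with the linear response correction, for an Ising-type Boltzmann machine with ±1 units, couplings `W`, and biases `theta`. All function names are hypothetical, introduced here only for illustration.

```python
import numpy as np

def mean_field_magnetizations(W, theta, n_iter=200, damping=0.5):
    """Damped fixed-point iteration of the naive mean-field equations
    m_i = tanh(theta_i + sum_j W_ij m_j)."""
    m = np.zeros_like(theta)
    for _ in range(n_iter):
        m_new = np.tanh(theta + W @ m)
        m = damping * m + (1.0 - damping) * m_new
    return m

def linear_response_correlations(W, m):
    """Linear response approximation: the susceptibility chi = dm/dtheta
    satisfies chi^{-1} = diag(1/(1 - m^2)) - W, and pair correlations
    are estimated as <s_i s_j> ~ chi_ij + m_i m_j for i != j."""
    A = np.diag(1.0 / (1.0 - m**2)) - W
    chi = np.linalg.inv(A)
    C = chi + np.outer(m, m)
    np.fill_diagonal(C, 1.0)  # s_i^2 = 1 exactly for +/-1 units
    return C

# Example: a two-unit machine with weak symmetric coupling.
W = np.array([[0.0, 0.2],
              [0.2, 0.0]])
theta = np.array([0.1, -0.1])
m = mean_field_magnetizations(W, theta)
C = linear_response_correlations(W, m)
```

In a learning loop, the estimated correlations `C` would replace the intractable model expectations in the usual Boltzmann-machine gradient, e.g. `dW_ij = eta * (data_corr_ij - C_ij)`, which is where the computational savings over exact (exponential-cost) inference come from.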