We study probabilistic generative models parameterized by feedforward neural networks. An attractor dynamics for probabilistic inference in these models is derived from a mean field approximation for large, layered sigmoidal networks. Fixed points of the dynamics correspond to solutions of the mean field equations, which relate the statistics of each unit to those of its Markov blanket. We establish global convergence of the dynamics by providing a Lyapunov function, and we show that the dynamics generate the signals required for unsupervised learning. Our results for feedforward networks provide a counterpart to those of Cohen-Grossberg and Hopfield for symmetric networks.
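To make the structure of such a result concrete, the sketch below implements the classical symmetric-network case that the abstract cites as the counterpart (Cohen-Grossberg/Hopfield), not the paper's feedforward sigmoid-belief-network equations, which involve additional variational parameters. For a binary network with symmetric couplings `J` (zero diagonal) and biases `h`, sequential mean-field updates `mu_i <- sigmoid(h_i + (J mu)_i)` each exactly minimize the variational free energy over one coordinate, so the free energy serves as a Lyapunov function and the iteration converges to a fixed point of the mean field equations. All function names here (`free_energy`, `mean_field_fixed_point`) are illustrative, not from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def free_energy(mu, J, h):
    # Variational (mean-field) free energy for a binary network with
    # symmetric couplings J and biases h, using a factorized Bernoulli
    # approximation with means mu. Small eps guards log(0).
    eps = 1e-12
    entropy = -np.sum(mu * np.log(mu + eps) + (1.0 - mu) * np.log(1.0 - mu + eps))
    return -0.5 * mu @ J @ mu - h @ mu - entropy

def mean_field_fixed_point(J, h, n_sweeps=200, tol=1e-10, seed=0):
    # Sequential (coordinate-wise) mean-field updates. For symmetric J
    # with zero diagonal, each update exactly minimizes the free energy
    # over one coordinate, so free_energy is non-increasing along the
    # trajectory -- a discrete analogue of a Lyapunov function.
    rng = np.random.default_rng(seed)
    n = len(h)
    mu = rng.uniform(0.25, 0.75, size=n)
    trajectory = [free_energy(mu, J, h)]
    for _ in range(n_sweeps):
        for i in range(n):
            # Each unit's mean is set from the means of its neighbors
            # (its Markov blanket in the symmetric case).
            mu[i] = sigmoid(h[i] + J[i] @ mu)
        trajectory.append(free_energy(mu, J, h))
        if trajectory[-2] - trajectory[-1] < tol:
            break
    return mu, trajectory
```

At convergence, `mu` satisfies the self-consistency condition `mu = sigmoid(h + J @ mu)`, the symmetric analogue of the mean field equations described in the abstract; the monotone free-energy trajectory plays the role of the Lyapunov function.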