Herding dynamical weights to learn
ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning
Learning the parameters of a (potentially partially observable) random field model is intractable in general. Instead of focusing on a single optimal parameter value, we propose to treat the parameters as dynamical quantities. We introduce an algorithm that generates complex dynamics for the parameters and the (both visible and hidden) state vectors. We show that, under certain conditions, averages computed over trajectories of the proposed dynamical system converge to the corresponding averages computed over the data. Our "herding dynamics" does not require expensive operations such as exponentiation and is fully deterministic.
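To make the idea concrete, here is a minimal sketch of the fully observed case of herding as described in the paper: at each step the current weights select a state by maximization (no sampling, no exponentiation), and the weights are then updated toward the data's feature averages. The feature map, the exhaustive argmax over an enumerated state space, and the function names (`features`, `herd`) are illustrative assumptions for this toy setting, not the paper's exact implementation.

```python
import itertools
import numpy as np

def features(s):
    # Illustrative feature map phi(s): singleton and pairwise statistics
    # of a binary state vector s in {-1, +1}^n (one possible choice).
    pairwise = np.outer(s, s)[np.triu_indices(len(s), k=1)]
    return np.concatenate([s, pairwise])

def herd(data_mean, states, num_steps=10000):
    # Deterministic herding dynamics (sketch of the fully visible case):
    #   s_t = argmax_s <w_{t-1}, phi(s)>
    #   w_t = w_{t-1} + data_mean - phi(s_t)
    # Averages of phi(s_t) over the trajectory converge to data_mean.
    phis = np.array([features(s) for s in states])
    w = np.zeros_like(data_mean)
    running_sum = np.zeros_like(data_mean)
    for _ in range(num_steps):
        # Exhaustive argmax over the enumerated state space; in large
        # models this maximization would have to be approximated.
        s_idx = np.argmax(phis @ w)
        w += data_mean - phis[s_idx]
        running_sum += phis[s_idx]
    return running_sum / num_steps

# Toy usage: match the feature averages of a small synthetic dataset.
n = 4
states = [np.array(s) for s in itertools.product([-1, 1], repeat=n)]
rng = np.random.default_rng(0)
data = rng.choice([-1, 1], size=(50, n))
data_mean = np.mean([features(x) for x in data], axis=0)
print(np.abs(herd(data_mean, states) - data_mean).max())  # small after many steps
```

Note the contrast with maximum-likelihood training of a random field: there is no partition function to estimate and no Markov chain to equilibrate; the weight update is a simple deterministic recursion driven by the gap between data and trajectory statistics.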