In this paper, we study a Tikhonov-type regularization for restricted Boltzmann machines (RBMs). We present two alternative formulations of the Tikhonov-type regularization, each of which encourages an RBM to learn a smoother probability distribution. Both formulations turn out to be combinations of the widely used weight-decay and sparsity regularizations. We empirically evaluate the proposed regularization schemes and show that using them helps extract more discriminative features with sparser hidden activation probabilities.
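Since the abstract notes that the proposed regularization reduces to a combination of weight decay and a sparsity penalty, a minimal sketch can illustrate how such terms enter a standard CD-1 update for a Bernoulli-Bernoulli RBM. This is not the paper's exact formulation; the function name, hyperparameters, and the specific sparsity penalty (pushing mean hidden activations toward a target rate) are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, b, c, v0, lr=0.01, weight_decay=1e-4,
             sparsity_target=0.1, sparsity_cost=0.1, rng=None):
    """One CD-1 update for a Bernoulli-Bernoulli RBM, adding weight-decay
    and sparsity terms (an illustrative combined regularizer, not the
    paper's exact Tikhonov-type formulation).

    W: (n_visible, n_hidden) weights; b: visible biases; c: hidden biases;
    v0: (batch, n_visible) binary data.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Positive phase: hidden activations given the data.
    h0_prob = sigmoid(v0 @ W + c)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)

    # Negative phase: one Gibbs step (reconstruction).
    v1_prob = sigmoid(h0 @ W.T + b)
    h1_prob = sigmoid(v1_prob @ W + c)

    # Contrastive-divergence gradient estimates.
    n = v0.shape[0]
    dW = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / n
    db = (v0 - v1_prob).mean(axis=0)
    dc = (h0_prob - h1_prob).mean(axis=0)

    # Weight decay: pull weights toward zero (smoother distribution).
    dW -= weight_decay * W

    # Sparsity: push mean hidden activation toward the target rate.
    dc -= sparsity_cost * (h0_prob.mean(axis=0) - sparsity_target)

    return W + lr * dW, b + lr * db, c + lr * dc
```

The sparsity term here acts only on the hidden biases; variants that also penalize the weights are common, and the paper's two formulations correspond to different such combinations.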