Training products of experts by minimizing contrastive divergence
Neural Computation
Unsupervised learning of distributions on binary vectors using two layer networks
A fast learning algorithm for deep belief nets
Neural Computation
Extracting and composing robust features with denoising autoencoders
Proceedings of the 25th international conference on Machine learning
ICML '09 Proceedings of the 26th Annual International Conference on Machine Learning
Dynamic Factor Graphs for Time Series Modeling
ECML PKDD '09 Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases: Part II
Imputing missing values in high-dimensional time series is a difficult problem. Previous approaches [11,8] trained neural architectures as probabilistic models of the data; we argue that this is not optimal for imputation. Instead, we propose to view temporal neural networks with latent variables as energy-based models and to train them directly for missing value recovery. We introduce two such models: the first is based on a one-dimensional convolution, and the second uses a recurrent neural network. We show how ideas from the energy-based learning framework can be used to train these models to recover missing values, and we evaluate them on a motion capture dataset.
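The core idea of the abstract — recovering missing entries by descending an energy function while clamping observed entries — can be illustrated with a minimal sketch. This is not the paper's model: for simplicity the "one-dimensional convolution" is a fixed linear autoregressive predictor with hand-chosen weights `w`, and the energy is the sum of squared prediction errors; all names here are illustrative.

```python
import numpy as np

def energy(x, w):
    """Energy of a sequence x under a linear 1-D convolutional predictor:
    E(x) = sum_t (x[t] - w . x[t-k:t])^2."""
    k = len(w)
    return sum((x[t] - np.dot(w, x[t - k:t])) ** 2 for t in range(k, len(x)))

def energy_grad(x, w):
    """Analytic gradient of E with respect to every entry of x."""
    k = len(w)
    g = np.zeros_like(x)
    for t in range(k, len(x)):
        r = x[t] - np.dot(w, x[t - k:t])   # prediction residual at step t
        g[t] += 2.0 * r                     # x[t] appears as the target
        g[t - k:t] -= 2.0 * r * w           # x[t-k:t] appear as inputs
    return g

def impute(x_obs, observed, w, lr=0.05, steps=300):
    """Recover missing entries (observed == False) by gradient descent
    on the energy, with observed entries clamped to their values."""
    x = np.where(observed, x_obs, 0.0)      # initialize missing entries to 0
    for _ in range(steps):
        g = energy_grad(x, w)
        g[observed] = 0.0                   # never move the observed entries
        x = x - lr * g
    return x

# Toy demo: an AR(2) sequence with a few entries hidden.
w = np.array([0.4, 0.5])
rng = np.random.default_rng(0)
x_true = np.zeros(60)
x_true[:2] = rng.normal(size=2)
for t in range(2, 60):
    x_true[t] = np.dot(w, x_true[t - 2:t])
observed = np.ones(60, dtype=bool)
observed[[10, 25, 26, 40]] = False

x_rec = impute(x_true, observed, w)
```

Energy-based training in the paper goes further (the predictor itself is learned), but the clamp-and-descend inference step above is the mechanism the abstract refers to.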