Imputing missing values in high-dimensional time series is a difficult problem. This paper presents a strategy for training energy-based graphical models for imputation directly, bypassing the difficulties that probabilistic approaches face. The training strategy is inspired by recent work on optimization-based learning (Domke, 2012) and allows complex neural models with convolutional and recurrent structure to be trained for imputation tasks. We use this training strategy to derive learning rules for three substantially different neural architectures. Inference in these models is performed by either truncated gradient descent or variational mean-field iterations. In our experiments, these training methods outperform the Contrastive Divergence learning algorithm, and they readily handle missing values in the training data itself. We demonstrate the performance of this learning scheme and the three models we introduce on one artificial and two real-world data sets.
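The inference procedure described above, truncated gradient descent on an energy function with observed values clamped, can be illustrated with a minimal sketch. This is not the paper's model: the quadratic smoothness energy, the function name, and all parameters below are illustrative assumptions, and the fixed step count stands in for the truncation.

```python
import numpy as np

def impute_truncated_gd(x, mask, n_steps=20, lr=0.1):
    """Fill missing entries of a 1-D time series by truncated gradient
    descent on a toy smoothness energy E(x) = sum_t (x[t+1] - x[t])^2.
    (Illustrative stand-in for a learned energy-based model.)

    x    : observed series, with arbitrary values at missing positions
    mask : boolean array, True where the value is observed
    """
    x = x.astype(float).copy()
    x[~mask] = 0.0  # simple initialization of the missing entries
    for _ in range(n_steps):
        # gradient of the smoothness energy with respect to each x[t]
        grad = np.zeros_like(x)
        diff = np.diff(x)            # x[t+1] - x[t]
        grad[:-1] -= 2.0 * diff      # contribution to d/dx[t]
        grad[1:] += 2.0 * diff       # contribution to d/dx[t+1]
        # descend only on missing coordinates; observed ones stay clamped
        x[~mask] -= lr * grad[~mask]
    return x
```

Because the step count is fixed and small at training time, the whole inference procedure is differentiable in the model parameters, which is what lets the models be trained for imputation directly rather than by maximum likelihood.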