The Conditional Restricted Boltzmann Machine (CRBM) is a recently proposed model for time series that has a rich, distributed hidden state and permits simple, exact inference. We present a new model, based on the CRBM, that preserves its most important computational properties and includes multiplicative three-way interactions that allow the effective interaction weight between two units to be modulated by the dynamic state of a third unit. We factor the three-way weight tensor implied by the multiplicative model, reducing the number of parameters from O(N³) to O(N²). The result is an efficient, compact model whose effectiveness we demonstrate by modeling human motion. Like the CRBM, our model can capture diverse styles of motion with a single set of parameters, and the three-way interactions greatly improve the model's ability to blend motion styles or to transition smoothly among them.
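The parameter saving from factoring the three-way tensor can be sketched numerically. In a factored form, each tensor entry is expressed as W_ijk = Σ_f A_if · B_jf · C_kf, so three N×F factor matrices (3NF parameters) stand in for the full N³ tensor. The sketch below (array names, sizes, and the choice of three equally sized unit groups are illustrative assumptions, not details from the paper) verifies that the effective pairwise weight modulated by a third unit group comes out identical either way:

```python
import numpy as np

N, F = 100, 20  # illustrative sizes: N units per group, F factors

rng = np.random.default_rng(0)
# Factor matrices standing in for the full three-way tensor W[i, j, k]
A = rng.standard_normal((N, F))  # e.g. visible-to-factor weights
B = rng.standard_normal((N, F))  # e.g. hidden-to-factor weights
C = rng.standard_normal((N, F))  # e.g. context-to-factor weights

# Full tensor implied by the factorization: W_ijk = sum_f A_if B_jf C_kf
W = np.einsum('if,jf,kf->ijk', A, B, C)

# Effective pairwise weight between units i and j, modulated by the
# dynamic state z of the third group:
z = rng.standard_normal(N)
W_eff_full = np.einsum('ijk,k->ij', W, z)  # via the O(N^3) tensor
W_eff_fact = A @ np.diag(C.T @ z) @ B.T   # via the O(N^2) factored form

assert np.allclose(W_eff_full, W_eff_fact)

# Parameter counts: full tensor vs. three factor matrices
print(N**3, 3 * N * F)  # 1000000 vs 6000
```

The factored path never materializes the N³ tensor: projecting the modulating state through C yields F factor gains, which then rescale the rank-one products of A and B.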