Continuous-time Bayesian networks are a natural structured representation language for multi-component stochastic processes that evolve continuously over time. Despite the compact representation provided by this language, inference in such models is intractable even in relatively simple structured networks. We introduce a mean field variational approximation in which we use a product of inhomogeneous Markov processes to approximate a joint distribution over trajectories. This variational approach leads to a globally consistent distribution, which can be efficiently queried. Additionally, it provides a lower bound on the probability of observations, making it attractive for learning tasks. Here we describe the theoretical foundations of the approximation and an efficient implementation that exploits the wide range of highly optimized ordinary differential equation (ODE) solvers, experimentally characterize the processes for which the approximation is suitable, and demonstrate an application to a large-scale real-world inference problem.
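As a rough sketch of the ODE machinery the abstract alludes to: the marginals of a single time-inhomogeneous Markov process obey the master equation dμ/dt = μ(t)Q(t), which an off-the-shelf solver can propagate. The two-state rate matrix below is a hypothetical illustration, not the paper's actual variational update equations.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rate_matrix(t):
    # Hypothetical time-inhomogeneous rate matrix Q(t) for a 2-state
    # process: off-diagonal entries are transition rates, rows sum to 0.
    a = 1.0 + 0.5 * np.sin(t)   # rate of 0 -> 1, varying over time
    b = 2.0                     # rate of 1 -> 0
    return np.array([[-a, a],
                     [b, -b]])

def marginals(mu0, t_grid):
    """Propagate the master equation d(mu)/dt = mu(t) @ Q(t)."""
    def rhs(t, mu):
        return mu @ rate_matrix(t)
    sol = solve_ivp(rhs, (t_grid[0], t_grid[-1]), mu0,
                    t_eval=t_grid, rtol=1e-8, atol=1e-10)
    return sol.y.T  # shape (len(t_grid), n_states)

# Start deterministically in state 0 and track marginals on [0, 5].
mu = marginals(np.array([1.0, 0.0]), np.linspace(0.0, 5.0, 51))
```

In the mean field setting, one such system of ODEs would be solved per component, with each component's rates depending on the current marginals of its neighbors; standard adaptive solvers keep this step cheap and accurate.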