A model for reasoning about persistence and causation
Computational Intelligence
Learning and inferring transportation routines
AAAI'04 Proceedings of the 19th National Conference on Artificial Intelligence
Continuous time Bayesian networks
UAI'02 Proceedings of the Eighteenth Conference on Uncertainty in Artificial Intelligence
Learning continuous time Bayesian networks
UAI'03 Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence
Extracting Places and Activities from GPS Traces Using Hierarchical Conditional Random Fields
International Journal of Robotics Research
Continuous Time Bayesian Networks for Host Level Network Intrusion Detection
ECML PKDD '08 Proceedings of the European Conference on Machine Learning and Knowledge Discovery in Databases - Part II
CTPPL: a continuous time probabilistic programming language
IJCAI'09 Proceedings of the 21st International Joint Conference on Artificial Intelligence
The use of hidden semi-Markov models in clinical diagnosis maze tasks
Intelligent Data Analysis
PutMode: prediction of uncertain trajectories in moving objects databases
Applied Intelligence
Mean Field Variational Approximation for Continuous-Time Bayesian Networks
The Journal of Machine Learning Research
Continuous-time Bayesian networks (CTBNs) (Nodelman, Shelton, & Koller 2002; 2003) are an elegant modeling language for structured stochastic processes that evolve over continuous time. The CTBN framework is based on homogeneous Markov processes, and defines two distributions for each local variable in the system, given its parents: an exponential distribution over when the variable transitions, and a multinomial distribution over the next value it takes. In this paper, we present two extensions to the framework that make it more useful in modeling practical applications. The first extension models arbitrary transition time distributions using Erlang-Coxian approximations, while maintaining tractable learning. We show how the censored data problem arises in learning the distribution, and present a solution based on expectation-maximization initialized by the Kaplan-Meier estimate. The second extension is a general method for reasoning about negative evidence, by introducing updates that assert no observable events occur over an interval of time. Such updates were not defined in the original CTBN framework, and we show that their inclusion can significantly improve the accuracy of filtering and prediction. We illustrate and evaluate these extensions in two real-world domains, email use and GPS traces of a person traveling about a city.
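The abstract's first extension hinges on the Kaplan-Meier estimate used to initialize EM under right-censoring: some trajectories end before the transition of interest is observed, so their durations only lower-bound the true transition time. The product-limit formula behind that estimate is simple to compute; the sketch below is illustrative (the function name and toy data are not from the paper).

```python
def kaplan_meier(durations, observed):
    """Kaplan-Meier product-limit estimate of the survival function S(t).

    durations: time until transition (or until censoring) for each trajectory
    observed:  True if the transition was seen, False if right-censored
    Returns a list of (time, survival_probability) pairs at event times.
    """
    events = sorted(zip(durations, observed))
    at_risk = len(events)          # trajectories still "alive" just before t
    survival = 1.0
    curve = []
    i = 0
    while i < len(events):
        t = events[i][0]
        transitions = 0
        leaving = 0
        # Group all records (observed or censored) sharing this time point.
        while i < len(events) and events[i][0] == t:
            if events[i][1]:
                transitions += 1
            leaving += 1
            i += 1
        if transitions:
            # Multiply in the conditional survival probability at time t.
            survival *= 1.0 - transitions / at_risk
            curve.append((t, survival))
        at_risk -= leaving         # censored records drop out of the risk set
    return curve
```

Censored records never contribute a factor to the product; they only shrink the risk set, which is exactly how the estimate avoids the downward bias of treating censoring times as observed transitions.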