Probabilistic programming languages and modeling toolkits are two modular ways to build and reuse stochastic models and inference procedures. Combining the strengths of both, we express models and inference as generalized coroutines in the same general-purpose language. We use existing facilities of the language, such as rich libraries, optimizing compilers, and types, to develop concise, declarative, and realistic models with competitive performance on exact and approximate inference. In particular, a wide range of models can be expressed using memoization. Because the deterministic parts of models run at full speed, custom inference procedures are trivial to incorporate, and inference procedures can reason about themselves without interpretive overhead. Within this framework, we introduce a new, general algorithm for importance sampling with look-ahead.
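The coroutine idea can be sketched in Python, whose generators act as the "generalized coroutines" the abstract describes: the model yields each random choice point to an inference driver, which supplies values and accumulates weights. This is a minimal illustrative sketch, not the paper's implementation; the names `model` and `importance_sample` and the message tags are hypothetical, and plain likelihood weighting stands in for the paper's importance sampling with look-ahead.

```python
import random

def model():
    # Toy sprinkler model written as a generator (a coroutine): each
    # `yield` hands a random choice point to the inference driver.
    rain = yield ("flip", 0.2)        # P(rain) = 0.2
    sprinkler = yield ("flip", 0.1)   # P(sprinkler) = 0.1
    wet_p = 0.95 if (rain or sprinkler) else 0.01
    # Evidence "grass is wet": weight the trace by its likelihood
    # instead of sampling and rejecting.
    yield ("observe", wet_p)
    return rain                       # query: was it raining?

def importance_sample(make_model, n=10000, rng=random):
    """Drive the model coroutine n times, accumulating likelihood weights."""
    totals = {}
    for _ in range(n):
        gen = make_model()
        weight = 1.0
        try:
            msg = next(gen)
            while True:
                tag = msg[0]
                if tag == "flip":
                    # Sample the choice from its prior; weight is unchanged.
                    msg = gen.send(rng.random() < msg[1])
                elif tag == "observe":
                    # Condition on evidence: multiply in its likelihood.
                    weight *= msg[1]
                    msg = gen.send(None)
        except StopIteration as stop:
            # The generator's return value is the query result for this trace.
            totals[stop.value] = totals.get(stop.value, 0.0) + weight
    z = sum(totals.values())
    return {v: w / z for v, w in totals.items()}
```

Running `importance_sample(model, n=20000)` estimates the posterior P(rain | wet); for these numbers the exact value is 0.19 / 0.2732 ≈ 0.70, which the estimate approaches as n grows. Note that the deterministic parts of the model (the `if` computing `wet_p`) run as ordinary host-language code, with no interpretive overhead.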