Probabilistic Horn abduction and Bayesian networks
Artificial Intelligence
Adaptive Probabilistic Networks with Hidden Variables
Machine Learning - Special issue on learning with probabilistic representations
Probabilistic Logic Programming and Bayesian Networks
ACSC '95 Proceedings of the 1995 Asian Computing Science Conference on Algorithms, Concurrency and Knowledge
TaskTracer: a desktop environment to support multi-tasking knowledge workers
Proceedings of the 10th International Conference on Intelligent User Interfaces
Learning probabilities for noisy first-order rules
IJCAI'97 Proceedings of the Fifteenth International Joint Conference on Artificial Intelligence - Volume 2
UAI'97 Proceedings of the Thirteenth Conference on Uncertainty in Artificial Intelligence
Parameter learning for relational Bayesian networks
Proceedings of the 24th International Conference on Machine Learning
MEBN: A language for first-order Bayesian knowledge bases
Artificial Intelligence
Learning Ground CP-Logic Theories by Leveraging Bayesian Network Learning Techniques
Fundamenta Informaticae - Progress on Multi-Relational Data Mining
Learning first-order probabilistic models with combining rules
Annals of Mathematics and Artificial Intelligence
A relational hierarchical model for decision-theoretic assistance
ILP'07 Proceedings of the 17th International Conference on Inductive Logic Programming
Location-based reasoning about complex multi-agent behavior
Journal of Artificial Intelligence Research
Type Extension Trees for feature construction and learning in relational domains
Artificial Intelligence
First-order probabilistic models allow us to model situations in which a random variable in the first-order model may have a large and varying number of parent variables in the ground ("unrolled") model. One approach to describing such models compactly is to independently specify the probability of the random variable conditioned on each individual parent (or on small sets of parents) and then combine these conditional distributions via a combining rule (e.g., noisy-OR). This paper presents algorithms for learning with combining rules. Specifically, algorithms based on gradient descent and expectation maximization are derived, implemented, and evaluated on synthetic data and on a real-world task. The results demonstrate that the algorithms can learn the parameters of both the individual parent-target distributions and the combining rules.
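To make the abstract's setup concrete, the sketch below shows the gradient-descent half of the idea for the noisy-OR combining rule: each parent independently "fires" the target with its own probability, and those per-parent probabilities are recovered by gradient ascent on the Bernoulli log-likelihood. This is a minimal illustration, not the paper's implementation; the function names (fit_noisy_or, noisy_or) and the sigmoid reparameterization are assumptions introduced here, and the paper's EM variant is not shown.

```python
import numpy as np

# Noisy-OR combining rule: P(y = 1 | x) = 1 - prod_i (1 - p_i)^{x_i},
# where p_i is the probability that active parent i alone fires the target.
# Parameters are kept in (0, 1) via p_i = sigmoid(w_i).

def sigmoid(w):
    return 1.0 / (1.0 + np.exp(-w))

def noisy_or(w, X):
    """P(y=1 | x) under noisy-OR; X is a binary parent matrix of shape (m, n)."""
    p = sigmoid(w)
    return 1.0 - np.prod((1.0 - p) ** X, axis=-1)

def fit_noisy_or(X, y, steps=3000, lr=0.5, eps=1e-9):
    """Maximize the Bernoulli log-likelihood of binary targets y by gradient ascent."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(steps):
        p = sigmoid(w)
        fail = np.prod((1.0 - p) ** X, axis=-1, keepdims=True)  # prod_j (1-p_j)^{x_j}
        q = np.clip(1.0 - fail[:, 0], eps, 1.0 - eps)           # P(y=1 | x) per example
        dq_dp = X * fail / (1.0 - p)                            # dq/dp_i per example
        dll_dq = np.where(y == 1, 1.0 / q, -1.0 / (1.0 - q))    # d log-lik / dq
        # Chain rule through the sigmoid: dp/dw = p(1 - p).
        grad = (dll_dq[:, None] * dq_dp * p * (1.0 - p)).mean(axis=0)
        w += lr * grad
    return sigmoid(w)

# Toy usage: recover three per-parent activation probabilities from synthetic data.
rng = np.random.default_rng(1)
true_p = np.array([0.8, 0.5, 0.1])
X = rng.integers(0, 2, size=(5000, 3))
y = (rng.random(5000) < noisy_or(np.log(true_p / (1.0 - true_p)), X)).astype(int)
print(fit_noisy_or(X, y))  # estimates should be close to true_p
```

The key property the abstract alludes to is visible here: the model is specified per parent (one p_i each), so the same parameters apply however many parents a ground variable happens to have, and the combining rule aggregates them.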