Learning first-order probabilistic models with combining rules

  • Authors:
  • Sriraam Natarajan; Prasad Tadepalli; Eric Altendorf; Thomas G. Dietterich; Alan Fern; Angelo Restificar

  • Affiliations:
  • Oregon State University, Corvallis, OR (all authors)

  • Venue:
  • ICML '05: Proceedings of the 22nd International Conference on Machine Learning
  • Year:
  • 2005

Abstract

First-order probabilistic models allow us to model situations in which a random variable in the first-order model may have a large and varying number of parent variables in the ground ("unrolled") model. One approach to compactly describing such models is to independently specify the probability of a random variable conditioned on each individual parent (or on small sets of parents) and then combine these conditional distributions via a combining rule (e.g., Noisy-OR). This paper presents algorithms for learning with combining rules. Specifically, algorithms based on gradient descent and expectation maximization are derived, implemented, and evaluated on synthetic data and on a real-world task. The results demonstrate that the algorithms are able to learn the parameters of both the individual parent-target distributions and the combining rules.
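
To make the combining-rule idea concrete, below is a minimal sketch, not taken from the paper: it implements the Noisy-OR combining rule the abstract cites as an example, plus one gradient-ascent step on the per-parent parameters in the spirit of the gradient-descent learner the abstract mentions. All function names and parameter values are hypothetical.

```python
import numpy as np

def noisy_or(p, x):
    # Noisy-OR combining rule: P(Y=1 | x) = 1 - prod_{i: x_i = 1} (1 - p_i),
    # where p_i is the probability that active parent i alone turns Y on.
    return 1.0 - np.prod((1.0 - p) ** x)

def log_likelihood_grad(p, x, y):
    # Gradient of log P(y | x) with respect to the per-parent parameters p_i.
    r = np.prod((1.0 - p) ** x)   # probability that no active parent fires
    q = 1.0 - r                   # Noisy-OR output P(Y=1 | x)
    if y == 1:
        return x * r / (max(q, 1e-12) * (1.0 - p))
    return -x / (1.0 - p)

# One gradient-ascent step on a single (parents, target) training example.
p = np.array([0.3, 0.5, 0.2])   # per-parent parameters (hypothetical values)
x = np.array([1, 0, 1])         # indicator of which parents are active
y = 1                           # observed target value
p = np.clip(p + 0.1 * log_likelihood_grad(p, x, y), 1e-6, 1.0 - 1e-6)
print(p, noisy_or(p, x))
```

Note that the Noisy-OR output is well defined for any number of active parents, which is what makes it usable as a combining rule in the first-order setting, where the number of ground parents varies from example to example.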