Degrees of Belief, Random Worlds, and Maximum Entropy

  • Authors:
  • Joseph Y. Halpern

  • Affiliations:
  • -

  • Venue:
  • DS '00: Proceedings of the Third International Conference on Discovery Science
  • Year:
  • 2000

Abstract

Consider a doctor with a knowledge base KB consisting of first-order information (such as "All patients with hepatitis have jaundice"), statistical information (such as "80% of patients with jaundice have hepatitis"), and default information (such as "Patients with pneumonia typically have fever"). The doctor may want to make decisions regarding a particular patient, using the KB in some principled way. To do this, it is often useful for the doctor to assign a numerical "degree of belief" measuring the strength of her belief in a given statement A. I focus on one principled method for doing so. The method, called the random-worlds method, is a natural one: for any given domain size N, we can look at the proportion of models satisfying A among the models of size N satisfying KB. If we do not know the domain size N but know that it is large, we can approximate the degree of belief in A given KB by taking the limit of this proportion as N goes to infinity. In many cases that arise in practice, the answers obtained by this method can be shown to match heuristic assumptions made in many standard AI systems. I also show that when the language is restricted to unary predicates (for example, symptoms and diseases, but not relations such as "taller than"), the answer provided by the random-worlds method can often be computed using maximum entropy. On the other hand, if the language includes binary predicates, all connections to maximum entropy seem to disappear; moreover, almost all the questions one might want to ask can be shown to be highly undecidable. I conclude with some general discussion of the problem of finding reasonable methods for inductive reasoning of the sort considered here, and of the relevance of these ideas to data mining and knowledge discovery. The talk covers joint work with Fahiem Bacchus, Adam Grove, and Daphne Koller [1,2].
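
In symbols, the definition the abstract describes can be sketched as follows (the #worlds notation is an assumption here, chosen to match the random-worlds literature): writing #worlds_N(φ) for the number of models of φ with domain {1, ..., N}, the degree of belief in A given KB is

```latex
\Pr{}_{N}(A \mid KB) \;=\;
  \frac{\#\mathrm{worlds}_{N}(A \wedge KB)}{\#\mathrm{worlds}_{N}(KB)},
\qquad
\Pr{}_{\infty}(A \mid KB) \;=\; \lim_{N \to \infty} \Pr{}_{N}(A \mid KB).
```

The limit may fail to exist, in which case the degree of belief is taken to be undefined.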
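As a concrete illustration, here is a minimal brute-force sketch (not from the paper) of Pr_N for the hepatitis example: two unary predicates Hep and Jaun, a KB saying every Hep element is Jaun, that roughly 80% of Jaun elements are Hep (statistical statements are read with a small tolerance, in the spirit of the random-worlds treatment), and that the patient c has jaundice. The function name, the constant c = element 0, and the tolerance value are all illustrative assumptions.

```python
from itertools import product

def degree_of_belief(n, target=0.8, tol=0.05):
    """Brute-force Pr_N(Hep(c) | KB) by enumerating all worlds of size n.

    A world over the domain {0, ..., n-1} assigns each element a pair
    (hep, jaun) of truth values for the unary predicates Hep and Jaun.
    KB: (i)   all x. Hep(x) -> Jaun(x)           (first-order)
        (ii)  ~80% of Jaun elements are Hep      (statistical, +/- tol)
        (iii) Jaun(c), where c is element 0      (fact about the patient)
    """
    kb_worlds = 0     # worlds of size n satisfying KB
    query_worlds = 0  # of those, worlds also satisfying Hep(c)
    for world in product([(False, False), (False, True),
                          (True, True), (True, False)], repeat=n):
        if any(hep and not jaun for hep, jaun in world):
            continue                      # violates (i)
        if not world[0][1]:
            continue                      # violates (iii): c lacks jaundice
        jaun = sum(1 for _, j in world if j)            # jaun >= 1 by (iii)
        hep_and_jaun = sum(1 for h, j in world if h and j)
        if abs(hep_and_jaun / jaun - target) > tol:
            continue                      # violates (ii)
        kb_worlds += 1
        if world[0][0]:                   # Hep(c) holds in this world
            query_worlds += 1
    return query_worlds / kb_worlds if kb_worlds else float("nan")

# The computed proportion should approach 0.8 as n grows (direct inference).
for n in (6, 8, 10):
    print(n, round(degree_of_belief(n), 3))
```

Enumeration of all 4^N worlds is exponential, of course; the maximum-entropy connection mentioned in the abstract is what often makes such degrees of belief computable without it when the language is unary.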