Computing conditional probabilities in large domains by maximizing Rényi's quadratic entropy

  • Authors:
  • Charles Lawrence Zitnick, III; Takeo Kanade

  • Venue:
  • Doctoral dissertation, Carnegie Mellon University
  • Year:
  • 2003

Abstract

In this dissertation we discuss methods for efficiently approximating conditional probabilities in large domains by maximizing the entropy of the distribution given a set of constraints. The constraints are constructed from conditional probabilities, typically of low order, that can be accurately computed from the training data. By appropriately choosing the constraints, maximum entropy methods can balance the tradeoff between errors due to bias and errors due to variance. Standard maximum entropy techniques are too computationally expensive for use in large domains in which the set of variables being conditioned upon varies. Instead of the standard measure of entropy first proposed by Shannon, we use a measure from the family of Rényi's entropies. If we allow our probability estimates to occasionally lie outside the range from 0 to 1, we can efficiently maximize Rényi's quadratic entropy subject to the constraints by solving a set of linear equations. We develop two algorithms, the inverse probability method and the recurrent linear network, for maximizing Rényi's quadratic entropy without bounds. The two algorithms produce identical results, but depending on the type of problem, one may be more computationally efficient than the other. We also propose an extension to the algorithms for partially enforcing the constraints based on our confidence in them. Our algorithms are tested on several applications, including collaborative filtering, image retrieval, and language modeling.
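
To make the "set of linear equations" claim concrete, the following is a minimal sketch, assuming the low-order conditional-probability constraints can be collected into a generic linear system Ap = b and that AAᵀ is invertible; the symbols A, b, and λ are illustrative and are not taken from the dissertation.

% Rényi's entropy of order \alpha; the quadratic case is \alpha = 2:
\[
H_\alpha(p) = \frac{1}{1-\alpha}\log\sum_i p_i^\alpha,
\qquad
H_2(p) = -\log\sum_i p_i^2 .
\]
% Maximizing H_2 is equivalent to minimizing p^\top p. With the
% bounds on p dropped and the constraints written as A p = b,
% the Lagrangian
\[
L(p,\lambda) = p^\top p - \lambda^\top (A p - b)
\]
% has the stationarity condition 2p = A^\top \lambda. Substituting
% into A p = b (and assuming A A^\top is invertible) gives
\[
A A^\top \lambda = 2 b,
\qquad
p^\ast = A^\top \bigl(A A^\top\bigr)^{-1} b .
\]
% The maximizer is thus obtained by solving a single linear system;
% because no bound constraints are imposed, entries of p^\ast may
% occasionally fall outside [0, 1], as the abstract notes.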