Finding the Maximizers of the Information Divergence From an Exponential Family

  • Authors: J. Rauh
  • Affiliation: Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany
  • Venue: IEEE Transactions on Information Theory
  • Year: 2011

Abstract

This paper investigates maximizers of the information divergence from an exponential family ε. It is shown that the rI-projection of a maximizer P to ε is a convex combination of P and a probability measure P− with disjoint support and the same value of the sufficient statistics A. This observation can be used to transform the original problem of maximizing D(·∥ε) over the set of all probability measures into the maximization of a function D̅ over a convex subset of ker A. The global maximizers of both problems correspond to each other. Furthermore, finding all local maximizers of D̅ yields all local maximizers of D(·∥ε). This paper also proposes two algorithms to find the maximizers of D̅ and applies them to two examples for which the maximizers of D(·∥ε) were not previously known.
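
For orientation, the following sketch spells out the standard definitions behind the abstract's notation and restates its structural result. The symbols P_ε and λ are introduced here for illustration and are not taken from the paper.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator{\supp}{supp}
\begin{document}
% Standard definitions (not quoted from the paper): the divergence from the
% family is the infimum over the closure of the family, and the
% rI-projection P_E is the minimizer.
\[
  D(P \,\|\, \mathcal{E}) \;=\; \inf_{Q \in \overline{\mathcal{E}}} D(P \,\|\, Q),
  \qquad
  P_{\mathcal{E}} \;:=\; \argmin_{Q \in \overline{\mathcal{E}}} D(P \,\|\, Q).
\]
% Structural result stated in the abstract: for a maximizer P, the
% rI-projection is a convex combination of P and a measure P_- with disjoint
% support and the same value of the sufficient statistics A.
\[
  P_{\mathcal{E}} \;=\; \lambda P + (1-\lambda)\, P_{-},
  \qquad
  \supp P \cap \supp P_{-} = \emptyset,
  \qquad
  A P = A P_{-},
  \qquad \lambda \in [0,1].
\]
\end{document}
```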
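As a concrete toy illustration of these quantities (not of the paper's two algorithms), consider the independence model of two binary variables. For that model the rI-projection of a joint distribution is the product of its marginals, and D(·∥ε) equals the mutual information. The helper names below (kl, ri_projection_independence) are invented for this sketch.

```python
import numpy as np

def kl(p, q):
    """Kullback-Leibler divergence D(p || q), with the convention 0*log(0/.) = 0."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def ri_projection_independence(p):
    """rI-projection of a 2x2 joint distribution onto the independence model:
    the product of its marginals (a standard fact for this model)."""
    p = p.reshape(2, 2)
    return np.outer(p.sum(axis=1), p.sum(axis=0)).reshape(-1)

# Joint distributions on {0,1}^2, states ordered as (00, 01, 10, 11).
P       = np.array([0.5, 0.0, 0.0, 0.5])   # a global maximizer for this model
P_minus = np.array([0.0, 0.5, 0.5, 0.0])   # disjoint support, same marginals

P_proj = ri_projection_independence(P)      # the uniform distribution
print("D(P || E) =", kl(P, P_proj))         # log 2 ≈ 0.6931

# The rI-projection is a convex combination of P and P_minus (lambda = 1/2),
# illustrating the decomposition described in the abstract.
print(np.allclose(P_proj, 0.5 * P + 0.5 * P_minus))  # True
```

In this example the maximizer P, its rI-projection (the uniform distribution), and P− exhibit exactly the structure described in the abstract, with λ = 1/2.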