The entropy of Markov trajectories

  • Authors:
  • L. Ekroot; T. M. Cover

  • Affiliations:
  • Jet Propulsion Lab., California Inst. of Technol., Pasadena, CA;-

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 2006

Abstract

The idea of thermodynamic depth put forth by S. Lloyd and H. Pagels (1988) requires the computation of the entropy of Markov trajectories. Toward this end, the authors consider an irreducible finite-state Markov chain with transition matrix P and associated entropy rate H(X) = -Σ_{i,j} μ_i P_ij log P_ij, where μ is the stationary distribution given by the solution of μ = μP. A trajectory T_ij of the Markov chain is a path with initial state i, final state j, and no intervening states equal to j. It is shown that the entropy H(T_ii) of the random trajectory originating and terminating in state i is given by H(T_ii) = H(X)/μ_i. Thus the entropy of the random trajectory T_ii is the product of the expected number of steps 1/μ_i to return to state i and the entropy rate H(X) per step for the stationary Markov chain. A general closed-form solution for the entropies H(T_ij) is given by H = K - K̃ + H_Δ, where H is the matrix of trajectory entropies H_ij = H(T_ij); K = (I - P + A)^{-1}(H* - H_Δ); K̃ is the matrix whose ij-th element K̃_ij equals the diagonal element K_jj of K; A is the matrix of stationary probabilities with entries A_ij = μ_j; H* is the matrix of single-step entropies with entries H*_ij = H(P_i) = -Σ_k P_ik log P_ik; and H_Δ is a diagonal matrix with entries (H_Δ)_ii = H(X)/μ_i.
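The closed-form solution in the abstract can be sketched numerically. The function name, the two-state example chain, and the use of a least-squares solve for the stationary distribution below are illustrative assumptions, not taken from the paper itself:

```python
import numpy as np

def trajectory_entropies(P):
    """Return the matrix H with H[i, j] = H(T_ij), following the
    closed form H = K - K~ + H_Delta quoted in the abstract.
    Assumes P is the transition matrix of an irreducible chain."""
    n = P.shape[0]

    # Stationary distribution mu solving mu = mu P with sum(mu) = 1,
    # obtained here via an (n+1) x n least-squares system.
    M = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    mu, *_ = np.linalg.lstsq(M, b, rcond=None)

    # Single-step entropies H*_i = H(P_i) = -sum_k P_ik log2 P_ik,
    # with the convention 0 log 0 = 0.
    safeP = np.where(P > 0, P, 1.0)
    Hstar_rows = -(P * np.log2(safeP)).sum(axis=1)

    # Entropy rate H(X) = sum_i mu_i H(P_i).
    HX = mu @ Hstar_rows

    Hstar = np.tile(Hstar_rows[:, None], (1, n))  # H*_ij = H(P_i)
    A = np.tile(mu, (n, 1))                       # A_ij = mu_j
    Hdelta = np.diag(HX / mu)                     # (H_Delta)_ii = H(X)/mu_i

    K = np.linalg.inv(np.eye(n) - P + A) @ (Hstar - Hdelta)
    Ktilde = np.tile(np.diag(K), (n, 1))          # K~_ij = K_jj
    return K - Ktilde + Hdelta

# Illustrative two-state chain.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
H = trajectory_entropies(P)
```

For this chain μ = (2/3, 1/3), so the diagonal entries satisfy the theorem H(T_ii) = H(X)/μ_i directly; the off-diagonal entry H(T_01) also agrees with the direct calculation h(0.1)/0.1 for the geometric first-passage trajectory out of state 0, which gives a simple sanity check on the closed form.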