Entropy and the law of small numbers
IEEE Transactions on Information Theory
On the maximum entropy properties of the binomial distribution
IEEE Transactions on Information Theory
Monotonic convergence in an information-theoretic law of small numbers
IEEE Transactions on Information Theory
Monotonicity, thinning, and discrete versions of the entropy power inequality
IEEE Transactions on Information Theory
Building on the recent work of Johnson (2007) and Yu (2008), we prove that entropy is a concave function with respect to the thinning operation $T_\alpha$. That is, if $X$ and $Y$ are independent random variables on $\mathbb{Z}_+$ with ultra-log-concave probability mass functions, then $H(T_\alpha X + T_{1-\alpha} Y) \ge \alpha H(X) + (1-\alpha) H(Y)$, $0 \le \alpha \le 1$, where $H$ denotes the discrete entropy. This is a discrete analogue of the inequality $h(\sqrt{\alpha}\, X + \sqrt{1-\alpha}\, Y) \ge \alpha h(X) + (1-\alpha) h(Y)$, $0 \le \alpha \le 1$, where $h$ denotes the differential entropy, which holds for continuous $X$ and $Y$ with finite variances and is equivalent to Shannon's entropy power inequality. As a consequence, we establish a special case of a conjecture of Shepp and Olkin (1981). Possible extensions are also discussed.
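The inequality can be checked numerically in a simple case. Binomial pmfs are ultra-log-concave, and thinning acts on them in closed form: $T_\alpha \mathrm{Bin}(n,p) = \mathrm{Bin}(n, \alpha p)$. The sketch below (an illustration, not from the paper; the specific parameter choices are arbitrary) computes both sides of $H(T_\alpha X + T_{1-\alpha} Y) \ge \alpha H(X) + (1-\alpha) H(Y)$ for two independent binomials:

```python
import math

def binom_pmf(n, p):
    """Probability mass function of Bin(n, p) as a list of length n+1."""
    return [math.comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)]

def entropy(pmf):
    """Discrete (Shannon) entropy in nats."""
    return -sum(pr * math.log(pr) for pr in pmf if pr > 0)

def convolve(p, q):
    """Pmf of the sum of two independent variables with pmfs p and q."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# Example parameters (arbitrary): X ~ Bin(10, 0.3), Y ~ Bin(8, 0.6),
# both ultra-log-concave, and a mixing weight alpha.
alpha = 0.4
X = binom_pmf(10, 0.3)
Y = binom_pmf(8, 0.6)

# Thinning a binomial rescales its success probability:
# T_alpha Bin(n, p) = Bin(n, alpha * p).
TX = binom_pmf(10, alpha * 0.3)
TY = binom_pmf(8, (1 - alpha) * 0.6)

lhs = entropy(convolve(TX, TY))                    # H(T_a X + T_{1-a} Y)
rhs = alpha * entropy(X) + (1 - alpha) * entropy(Y)  # a H(X) + (1-a) H(Y)
print(f"lhs = {lhs:.4f}, rhs = {rhs:.4f}")
assert lhs >= rhs  # the concavity inequality holds in this instance
```

One numerical instance of course proves nothing; the theorem's content is that the inequality holds for all ultra-log-concave $X$, $Y$ and all $\alpha \in [0,1]$.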