We characterize the rate-distortion function for source coding with decoder side information when the ith reconstruction symbol is allowed to depend only on the first i + ℓ side information symbols, for some finite look-ahead ℓ, in addition to the index from the encoder. For the case of causal side information, i.e., ℓ = 0, we find that the penalty of causality is the omission of the subtracted mutual information term in the Wyner-Ziv rate-distortion function. For ℓ > 0, we derive a computable "infinite-letter" expression for the rate-distortion function. When specialized to the near-lossless case, our results characterize the best achievable rate for the Slepian-Wolf source coding problem with finite side information look-ahead, and they have some surprising implications. We find that side information is useless for any fixed ℓ when the joint probability mass function (PMF) of the source and side information satisfies the positivity condition P(x, y) > 0 for all (x, y). More generally, the optimal rate depends on the distribution of the pair (X, Y) only through the distribution of X and the bipartite graph whose edges represent the pairs (x, y) for which P(x, y) > 0. On the other hand, if the side information look-ahead is allowed to grow faster than logarithmically in the block length, then H(X|Y) is achievable. Finally, we apply our approach to derive a computable expression for channel capacity when state information is available at the encoder with limited look-ahead.
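The near-lossless dichotomy can be illustrated numerically: when the positivity condition holds, any fixed look-ahead buys nothing, so the best achievable rate is H(X); super-logarithmic look-ahead recovers the Slepian-Wolf rate H(X|Y). A minimal sketch computing both quantities and checking the positivity condition for a doubly symmetric binary source (the PMF and helper names here are illustrative, not from the paper):

```python
import math
from itertools import product

def rates_and_positivity(pmf):
    """Given a joint PMF P(x, y) as {(x, y): prob}, return
    (H(X), H(X|Y)) in bits and whether P(x, y) > 0 for all (x, y)."""
    xs = sorted({x for x, _ in pmf})
    ys = sorted({y for _, y in pmf})
    # Marginals of X and Y
    px = {x: sum(pmf.get((x, y), 0.0) for y in ys) for x in xs}
    py = {y: sum(pmf.get((x, y), 0.0) for x in xs) for y in ys}
    h_x = -sum(p * math.log2(p) for p in px.values() if p > 0)
    # H(X|Y) = -sum_{x,y} P(x, y) log2 P(x|y)
    h_x_given_y = 0.0
    for (x, y), p in pmf.items():
        if p > 0:
            h_x_given_y -= p * math.log2(p / py[y])
    positive = all(pmf.get((x, y), 0.0) > 0 for x, y in product(xs, ys))
    return h_x, h_x_given_y, positive

# Doubly symmetric binary source: Y is X through a BSC(0.1).
eps = 0.1
pmf = {(0, 0): (1 - eps) / 2, (0, 1): eps / 2,
       (1, 0): eps / 2, (1, 1): (1 - eps) / 2}
h_x, h_xy, pos = rates_and_positivity(pmf)
# pos is True, so with any fixed look-ahead the best rate is
# h_x = 1 bit, while super-logarithmic look-ahead achieves
# h_xy = h(0.1) ≈ 0.469 bits.
```

The bipartite support graph mentioned in the abstract is exactly the set of pairs (x, y) with P(x, y) > 0; under the positivity condition this graph is complete, which is why the side information becomes useless for any fixed ℓ.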