Universal sequential learning and decision from individual data sequences
COLT '92 Proceedings of the fifth annual workshop on Computational learning theory
Finite state verifiers I: the power of interaction
Journal of the ACM (JACM)
Universal Finite Memory Machines for Coding Binary Sequences
DCC '00 Proceedings of the Conference on Data Compression
Complexity analysis of adaptive binary arithmetic coding software implementations
NEW2AN'11/ruSMART'11 Proceedings of the 11th international conference and 4th international conference on Smart spaces and next generation wired/wireless networking
Statistical estimation with bounded memory
Statistics and Computing
Let $\{X_i\}_{i=1}^{\infty}$ be a sequence of independent Bernoulli random variables with probability $p$ that $X_i = 1$ and probability $q = 1 - p$ that $X_i = 0$ for all $i \geq 1$. Time-invariant finite-memory (i.e., finite-state) estimation procedures for the parameter $p$ are considered which take $X_1, \cdots$ as an input sequence. In particular, an $n$-state deterministic estimation procedure is described which can estimate $p$ with mean-square error $O(\log n / n)$, and an $n$-state probabilistic estimation procedure which can estimate $p$ with mean-square error $O(1/n)$. It is proved that the $O(1/n)$ bound is optimal to within a constant factor. In addition, it is shown that linear estimation procedures are just as powerful (up to the measure of mean-square error) as arbitrary estimation procedures. The proofs are based on an analog of the well-known matrix tree theorem, called the Markov chain tree theorem.
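To illustrate the flavor of an $n$-state probabilistic estimator, here is a minimal sketch (not the paper's exact construction) of a classic randomized birth-death chain whose state has stationary distribution $\mathrm{Binomial}(n-1, p)$, so that the normalized state $k/(n-1)$ estimates $p$ with mean-square error $p(1-p)/(n-1) = O(1/n)$. The function name and parameters are illustrative assumptions.

```python
import random

def finite_state_estimate(bits, n, seed=0):
    """Hypothetical n-state probabilistic estimator for a Bernoulli parameter p.

    The machine keeps one state k in {0, ..., n-1}. On input 1 it moves up
    with probability (n-1-k)/(n-1); on input 0 it moves down with probability
    k/(n-1). Detailed balance shows the stationary distribution of k is
    Binomial(n-1, p), so the estimate k/(n-1) has stationary mean p and
    variance p(1-p)/(n-1) = O(1/n), matching the abstract's rate.
    """
    rng = random.Random(seed)  # internal coin flips of the probabilistic machine
    k = n // 2                 # arbitrary starting state
    for b in bits:
        if b == 1 and rng.random() < (n - 1 - k) / (n - 1):
            k += 1
        elif b == 0 and rng.random() < k / (n - 1):
            k -= 1
    return k / (n - 1)

# Demo: estimate p = 0.3 from 20,000 Bernoulli samples with a 64-state machine.
src = random.Random(1)
p = 0.3
bits = [1 if src.random() < p else 0 for _ in range(20000)]
est = finite_state_estimate(bits, n=64)
```

Note the machine stores only the current state $k$ (and flips private coins), so its memory is $\log_2 n$ bits regardless of the input length.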