The problem of universal simulation given a training sequence is studied in both a stochastic setting and an individual-sequence setting. In the stochastic setting, the training sequence is assumed to be emitted by a Markov source of unknown order, extending previous work in which the order is assumed known and leading to the notion of twice-universal simulation. A simulation scheme that partitions the set of sequences of a given length into classes is proposed for this setting and shown to be asymptotically optimal. This partition extends the notion of type classes to the twice-universal setting. In the individual-sequence scenario, the same simulation scheme is shown to generate sequences that are statistically similar, in a strong sense, to the training sequence for statistics of any order, while essentially maximizing the uncertainty of the output.
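To illustrate the type-class idea underlying such schemes, consider the simplest known-order case (order 0, not the paper's twice-universal scheme): the type class of a training sequence is the set of sequences sharing its empirical symbol counts, i.e. its distinct permutations, and drawing uniformly from this class can be done with a uniform random shuffle. The function name and alphabet below are illustrative assumptions, a minimal sketch rather than the paper's construction.

```python
import random
from collections import Counter

def simulate_order0(train, rng=None):
    """Draw an output sequence uniformly from the order-0 type class of
    `train` (all sequences with the same empirical symbol counts).
    A uniform random permutation induces the uniform distribution over
    the type class, since every distinct sequence in the class arises
    from the same number of permutations."""
    rng = rng or random.Random()
    out = list(train)
    rng.shuffle(out)  # uniform over permutations of the training symbols
    return out

# Usage: the output always lies in the same type class as the training data.
x = list("aababbaab")
y = simulate_order0(x, random.Random(0))
assert Counter(y) == Counter(x)  # identical empirical counts
```

Extending this to Markov order k means preserving k-th-order transition counts rather than symbol counts, and the twice-universal setting additionally requires handling the unknown order; the sketch above only conveys the order-0 case.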