In this paper, we consider the problem of joint universal variable-rate lossy coding and identification for parametric classes of stationary β-mixing sources with general (Polish) alphabets. Compression performance is measured in terms of Lagrangians, while identification performance is measured by the variational distance between the true source and the estimated source. Provided that the sources mix at a sufficiently fast rate and satisfy certain smoothness and Vapnik-Chervonenkis (VC) learnability conditions, it is shown that, for bounded metric distortions, there exist universal schemes for joint lossy compression and identification whose Lagrangian redundancies converge to zero as √(V_n log n / n) as the block length n tends to infinity, where V_n is the VC dimension of a certain class of decision regions defined by the n-dimensional marginal distributions of the sources; furthermore, for each n, the decoder can identify the n-dimensional marginal of the active source up to a ball of radius O(√(V_n log n / n)) in variational distance, eventually with probability one. The results are supplemented by several examples of parametric sources satisfying the regularity conditions.
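As context for the performance criterion named above, the following is a minimal sketch of how a Lagrangian redundancy is typically defined in variable-rate (entropy-constrained) lossy coding; the encoder/decoder pair (f_n, g_n), distortion d, length function ℓ, and multiplier λ are standard notation assumed here, not taken from the abstract.

% Sketch under assumed notation (standard entropy-constrained setup).
% A variable-rate block code C_n = (f_n, g_n) maps X^n to a binary string
% f_n(X^n) of length \ell(f_n(X^n)) and reproduces g_n(f_n(X^n)).
\[
  L_\lambda(P, C_n)
    = \mathbb{E}_P\!\left[\frac{1}{n}\, d\bigl(X^n, g_n(f_n(X^n))\bigr)
      + \frac{\lambda}{n}\, \ell\bigl(f_n(X^n)\bigr)\right],
  \qquad \lambda > 0.
\]
% The Lagrangian redundancy is the gap to the best code for the (unknown)
% source P; the abstract's universal scheme drives this gap to zero at the
% stated rate:
\[
  L_\lambda(P, C_n) - \inf_{C_n'} L_\lambda(P, C_n')
    = O\!\left(\sqrt{\frac{V_n \log n}{n}}\right).
\]

In this reading, λ trades off per-letter distortion against per-letter codelength, so a single Lagrangian score captures both objectives of a variable-rate code.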