Single-Bit Oversampled A/D Conversion with Exponential Accuracy in the Bit-Rate
DCC '00 Proceedings of the Conference on Data Compression
The accuracy of simple analog-to-digital conversion depends on the resolution of discretization in both amplitude and time. For implementation convenience, high conversion accuracy is usually attained by refining the discretization in time, i.e., by oversampling. It is commonly believed that oversampling degrades the rate-distortion behavior of the conversion, since the bit rate B increases linearly with the oversampling factor, giving only a slow error decay of order 1/B. We demonstrate that the information obtained in oversampled analog-to-digital conversion can easily be encoded in a manner that requires only a logarithmic increase of the bit rate with the redundancy, achieving error decay that is exponential in the bit rate.
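The bit-rate argument can be illustrated numerically. The sketch below, which uses an illustrative test signal and parameters not taken from the paper, one-bit (sign) quantizes an oversampled signal and compares two encodings: storing every sample's sign bit (bit count linear in the oversampling factor r) versus storing only the positions of the sign changes, each of which costs about log2(r) bits. Since the number of sign changes is a property of the signal rather than of r, the second encoding grows only logarithmically with the redundancy, while the time resolution — and hence the conversion accuracy — improves like 1/r.

```python
import numpy as np

def sign_bits(signal):
    """One-bit quantization: keep only the sign of each oversampled sample."""
    return (signal >= 0).astype(np.uint8)

def crossing_positions(bits):
    """Indices where the one-bit output flips, i.e. zero crossings of the input."""
    return np.flatnonzero(np.diff(bits.astype(np.int8)) != 0)

# Bandlimited test signal: a sum of low-frequency sinusoids, oversampled by r.
for r in (8, 64, 512):                    # oversampling factors
    n = 16 * r                            # 16 Nyquist intervals of samples
    t = np.arange(n) / r                  # time in Nyquist-interval units
    x = np.sin(2 * np.pi * 0.9 * t) + 0.3 * np.sin(2 * np.pi * 0.4 * t + 1.0)

    bits = sign_bits(x)
    crossings = crossing_positions(bits)

    naive_bits = bits.size                                   # grows linearly in r
    pos_bits = crossings.size * int(np.ceil(np.log2(r)))     # grows ~ log2(r)
    print(f"r={r:4d}  crossings={crossings.size:3d}  "
          f"naive={naive_bits:5d} bits  positional={pos_bits:4d} bits")
```

Running this shows the crossing count staying essentially constant as r grows, so the positional encoding's bit budget rises only logarithmically while the naive one rises linearly.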