Existing image and data compression techniques try to minimize the mean square deviation between the original data f(x,y,z) and the compressed-decompressed data $\widetilde f(x,y,z)$. In many practical situations, a reconstruction that only guarantees a small mean square error over the data set is unacceptable. For example, if we use meteorological data to plan the best trajectory for a plane, then what we really want to know are the meteorological parameters such as wind, temperature, and pressure along that trajectory. If the values along the trajectory are not reconstructed accurately enough, the plane may crash; the fact that on average we get a good reconstruction does not help. In general, what we need is a compression that guarantees that for each (x,y,z), the difference $|f(x,y,z)-\widetilde f(x,y,z)|$ is bounded by a given value Δ – i.e., that the actual value f(x,y,z) belongs to the interval$$[\widetilde f(x,y,z)-\Delta,\widetilde f(x,y,z)+\Delta].$$ In this paper, we describe new efficient techniques for data compression under such interval uncertainty.
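To make the guarantee concrete, here is a minimal sketch (not the paper's actual method) of the simplest compression scheme satisfying such an interval bound: uniform quantization with step 2Δ, which ensures $|f(x,y,z)-\widetilde f(x,y,z)|\le\Delta$ at every grid point. The function names and the random test field are illustrative assumptions.

```python
import numpy as np

def compress(f, delta):
    # Replace each value by the index of the nearest multiple of 2*delta.
    # Rounding to the nearest grid point of spacing 2*delta guarantees
    # that the reconstruction error is at most delta at every point.
    return np.round(f / (2.0 * delta)).astype(np.int64)

def decompress(q, delta):
    # Reconstruct the value as the corresponding multiple of 2*delta.
    return q * (2.0 * delta)

# Illustrative 3-D field standing in for meteorological data f(x, y, z).
rng = np.random.default_rng(0)
f = rng.normal(size=(8, 8, 8))
delta = 0.05

f_tilde = decompress(compress(f, delta), delta)

# Worst-case (not just average) error is bounded by delta,
# so every f(x, y, z) lies in [f_tilde - delta, f_tilde + delta].
assert np.max(np.abs(f - f_tilde)) <= delta + 1e-12
```

The integer indices returned by `compress` can then be entropy-coded; the point of the sketch is only that the per-point error bound, unlike an MSE bound, is preserved exactly by construction.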