A Multi-Scale Hybrid Linear Model for Lossy Image Representation

  • Authors:
  • Wei Hong; John Wright; Kun Huang; Yi Ma

  • Affiliations:
  • University of Illinois at Urbana-Champaign; University of Illinois at Urbana-Champaign; Ohio State University; University of Illinois at Urbana-Champaign

  • Venue:
  • ICCV '05: Proceedings of the Tenth IEEE International Conference on Computer Vision, Volume 1
  • Year:
  • 2005


Abstract

This paper introduces a simple and efficient representation for natural images. We partition an image into blocks and treat the blocks as vectors in a high-dimensional space. We then fit a piecewise linear model (i.e., a union of affine subspaces) to the vectors at each down-sampling scale. We call this a multi-scale hybrid linear model of the image. The hybrid and hierarchical structure of this model allows us to effectively extract and exploit multi-modal correlations in the image data at different scales. It remedies, both conceptually and computationally, the limitations of many existing image representation methods, which are based on either a fixed linear transformation (e.g., DCT, wavelets), an adaptive uni-modal linear transformation (e.g., PCA), or a multi-modal model at a single scale. We justify, both analytically and experimentally, why and how such a simple multi-scale hybrid model can simultaneously reduce model complexity and computational cost. Despite a small overhead for the model, our results show that this new model yields more compact representations for a wide variety of natural images, over a wide range of signal-to-noise ratios, than many existing methods, including wavelets.
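To make the core idea concrete, here is a minimal single-scale sketch of a hybrid linear model: partition an image into blocks, vectorize the blocks, group them (here with a plain k-means step, a stand-in for the paper's subspace-segmentation procedure), and fit each group with an affine subspace (cluster mean plus top principal directions). The block size `b`, cluster count `k`, and subspace dimension `dim` are illustrative choices, not the paper's; the multi-scale hierarchy and model-selection machinery are omitted.

```python
import numpy as np

def blocks_to_vectors(img, b=8):
    """Partition a grayscale image into b x b blocks, one vector per block."""
    h, w = img.shape
    h, w = h - h % b, w - w % b  # drop any ragged border
    v = img[:h, :w].reshape(h // b, b, w // b, b).swapaxes(1, 2)
    return v.reshape(-1, b * b).astype(float)

def fit_hybrid_linear(vectors, k=4, dim=4, iters=20, seed=0):
    """Cluster block vectors (simple k-means), then fit each cluster with an
    affine subspace: mean + top `dim` principal directions (via SVD)."""
    rng = np.random.default_rng(seed)
    centers = vectors[rng.choice(len(vectors), k, replace=False)]
    for _ in range(iters):
        d2 = ((vectors[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d2.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = vectors[labels == j].mean(0)
    models = []
    for j in range(k):
        X = vectors[labels == j]
        if len(X) == 0:  # empty cluster: degenerate model
            models.append((np.zeros(vectors.shape[1]),
                           np.zeros((0, vectors.shape[1]))))
            continue
        mu = X.mean(0)
        _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
        models.append((mu, Vt[:dim]))
    return labels, models

def reconstruct(vectors, labels, models):
    """Project each block vector onto its cluster's affine subspace."""
    out = np.empty_like(vectors)
    for j, (mu, B) in enumerate(models):
        X = vectors[labels == j] - mu
        out[labels == j] = X @ B.T @ B + mu
    return out
```

Because each cluster gets its own low-dimensional affine model, the union of subspaces can fit multi-modal block statistics that a single global PCA basis would have to average over.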