Joint Scene and Signal Modeling for Wavelet-Based Video Coding with Cellular Neural Network Architecture

  • Authors:
  • Chang Wen Chen; Jiebo Luo; Lulin Chen; Kevin J. Parker

  • Affiliations:
  • Department of Electrical Engineering, University of Missouri-Columbia, Columbia, MO 65211; Department of Electrical Engineering, University of Rochester, Rochester, NY 14627-0231; Department of Electrical Engineering, University of Rochester, Rochester, NY 14627-0231; Department of Electrical Engineering, University of Rochester, Rochester, NY 14627-0231

  • Venue:
  • Journal of VLSI Signal Processing Systems - Special issue on recent development in video: algorithms, implementation and applications
  • Year:
  • 1997

Abstract

This paper presents a joint scene and signal modeling approach for the design of an adaptive quantization scheme applied to the wavelet coefficients in subband video coding applications. The joint modeling includes two integrated components: scene modeling, characterized by neighborhood binding with a Gibbs random field, and signal modeling, characterized by matching of the wavelet coefficient distribution. With this joint modeling, the quantization becomes adaptive not only to the wavelet coefficient signal distribution but also to the prominent image scene structures. The proposed quantization scheme based on the joint scene and signal modeling is accomplished through adaptive clustering with spatial neighborhood constraints. Such spatial constraints allow the quantization to shift its bit allocation, if necessary, to perceptually more important coefficients so that the scene structure is preserved. This joint modeling enables the quantization to reach beyond the limits of traditional statistical signal modeling-based approaches, which often lack scene adaptivity. Furthermore, the dynamically enforced spatial constraints of the Gibbs random field are able to overcome the shortcomings of artificial block division, which is usually the major source of distortion when video is coded by block-based approaches at low bit rates. In addition, we introduce a cellular neural network architecture for the hardware implementation of the proposed adaptive quantization, and we prove that this cellular neural network converges to the desired steady state under the suggested update scheme. The adaptive quantization scheme based on the joint scene and signal modeling has been successfully applied to videoconferencing, and very favorable results have been obtained. We believe that this joint modeling-based video coding will have an impact on many other applications because it is able to perform signal-adaptive and scene-adaptive quantization simultaneously.
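To make the abstract's central idea concrete, the sketch below illustrates one way adaptive clustering of wavelet coefficients can be combined with a Gibbs/Potts-style spatial neighborhood constraint. It is a minimal illustration, not the authors' formulation: the function name quantize_subband, the 4-pixel neighborhood, the penalty weight beta, and the ICM-style label update are all illustrative assumptions; the paper's actual energy terms and cellular neural network update rule may differ.

```python
import numpy as np

def quantize_subband(coeffs, n_levels=8, beta=0.5, n_iters=10):
    """Assign each wavelet coefficient a quantization label so that the label
    field balances signal fidelity (distance to the cluster centroid, i.e. the
    signal model) against spatial smoothness (agreement with neighboring
    labels, i.e. the scene model)."""
    flat = coeffs.ravel()
    # Initialize centroids from the empirical coefficient distribution.
    centroids = np.quantile(flat, np.linspace(0.0, 1.0, n_levels))
    labels = np.abs(flat[:, None] - centroids[None, :]).argmin(axis=1)
    labels = labels.reshape(coeffs.shape)

    H, W = coeffs.shape
    for _ in range(n_iters):
        # Re-estimate centroids from the current clusters (signal modeling).
        for k in range(n_levels):
            mask = labels == k
            if mask.any():
                centroids[k] = coeffs[mask].mean()
        # ICM-style label update with a neighborhood constraint (scene modeling).
        for i in range(H):
            for j in range(W):
                d2 = (coeffs[i, j] - centroids) ** 2
                neighbors = []
                if i > 0:     neighbors.append(labels[i - 1, j])
                if i < H - 1: neighbors.append(labels[i + 1, j])
                if j > 0:     neighbors.append(labels[i, j - 1])
                if j < W - 1: neighbors.append(labels[i, j + 1])
                # Penalize labels that disagree with the 4-neighborhood.
                penalty = np.array([sum(nl != k for nl in neighbors)
                                    for k in range(n_levels)])
                labels[i, j] = int((d2 + beta * penalty).argmin())
    return labels, centroids
```

In this reading, beta controls how strongly the scene constraint can override a purely distribution-driven assignment, which is the mechanism the abstract describes for shifting bits toward perceptually important, structurally coherent regions.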