Synthesizing processed video by filtering temporal relationships

  • Authors: R. Rajagopalan; M. T. Orchard
  • Affiliations: Emuzed Inc., Fremont, CA; -
  • Venue: IEEE Transactions on Image Processing
  • Year: 2002


Abstract

Temporal relationships (motion fields) have been widely exploited by researchers for video processing. Their primary use has been to group pixels in spatiotemporal neighborhoods. Typically, video processing is achieved by filtering, modeling, or analyzing pixels in these neighborhoods. In spite of the widespread use of motion information to process video, the fields are rarely treated as signals, i.e., the temporal relationships are seldom considered as a distinct time series. A notable exception is the generalized autoregressive modeling of these relationships in Rajagopalan et al. (1997). In this work, we present a generalization of finite impulse response filtering applicable to temporal relationships and continue the spirit of that work by treating motion fields as a distinct signal (albeit one that is closely tied to the pixel intensities). Applications presented are preprocessing of video for coding and for noise reduction. Instead of filtering pixels in spatiotemporal neighborhoods directly, we argue that it may be more beneficial to filter the temporal relationships first and then synthesize processed video. Simulations show MPEG-1 rate gains of up to 20% for coding processed video compared to unprocessed video, where the processing leaves the original perceptually unchanged. Noise reduction experiments demonstrate a gain of 0.5 dB at high signal-to-noise ratios over the best results in the published literature, while at low to moderate SNRs the improvements are 0.3 dB lower.
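The core idea of the abstract, treating the motion field as a temporal signal, FIR-filtering it, and then synthesizing the processed frames, can be illustrated with a minimal sketch. The function names, the edge-replication padding, and the nearest-neighbor warp are illustrative assumptions for this sketch, not the paper's actual method:

```python
import numpy as np

def fir_filter_motion_fields(motion_fields, taps):
    """Apply a 1-D FIR filter along the temporal axis of a motion-field
    sequence.  motion_fields: shape (T, H, W, 2), per-pixel (dy, dx)
    vectors for T frames.  taps: FIR coefficients (summing to 1 gives a
    smoothing filter).  Padding by edge replication keeps length T."""
    taps = np.asarray(taps, dtype=float)
    T = motion_fields.shape[0]
    k = len(taps) // 2
    padded = np.concatenate(
        [np.repeat(motion_fields[:1], k, axis=0),
         motion_fields,
         np.repeat(motion_fields[-1:], k, axis=0)], axis=0)
    out = np.zeros(motion_fields.shape, dtype=float)
    for i, c in enumerate(taps):
        out += c * padded[i:i + T]          # weighted sum over the taps
    return out

def synthesize_frame(reference, motion):
    """Motion-compensate `reference` with a (filtered) per-pixel motion
    field, using a simple nearest-neighbor fetch with border clipping."""
    H, W = reference.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + motion[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + motion[..., 1]).astype(int), 0, W - 1)
    return reference[src_y, src_x]
```

For example, temporally smoothing the motion fields with taps `[0.25, 0.5, 0.25]` and then warping each reference frame yields the "synthesized" processed video; a temporally constant motion field passes through such a unity-gain filter unchanged.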