Video watermarking using motion compensated 2D+t+2D filtering

  • Authors:
  • Deepayan Bhowmik; Charith Abhayaratne

  • Affiliations:
  • The University of Sheffield, Sheffield, United Kingdom; The University of Sheffield, Sheffield, United Kingdom

  • Venue:
  • Proceedings of the 12th ACM workshop on Multimedia and security
  • Year:
  • 2010

Abstract

Frame-by-frame video watermark embedding that does not consider motion results in flicker and other motion-mismatch artifacts in the watermarked video. Motion compensated temporal filtering (MCTF) provides a better framework for video watermarking by accounting for object motion. However, depending on the motion and texture characteristics of the video and on the choice of spatio-temporal subband for watermark embedding, MCTF has to be performed either in the spatial domain (t+2D) or in the wavelet domain (2D+t). In this work we propose improved video watermarking schemes based on a generalized motion compensated 2D+t+2D framework for watermark embedding. The MCTF is improved by modifying its update step so that it follows the motion trajectory through the hierarchical temporal decomposition: direct motion vector fields are used in the update step and implied motion vectors in the prediction step. The proposed 2D+t+2D framework with the modified MCTF-based watermarking achieves lower embedding distortion, in terms of both mean squared error and the flicker metric, for various combinations of spatio-temporal decompositions, compared with existing frame-by-frame and t+2D domain video watermarking. The proposed scheme also outperforms conventional t+2D watermarking in robustness, particularly for blind watermarking schemes in which the motion is estimated from the watermarked video.
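To make the prediction/update structure of MCTF concrete, the sketch below shows a motion-compensated Haar lifting pair over one frame pair. It is an illustration only, not the authors' implementation: it assumes a single integer global motion vector and uses a circular shift (`np.roll`) as a stand-in for real block-based motion compensation, which is what makes the steps exactly invertible here.

```python
import numpy as np

def motion_compensate(frame, mv):
    # Shift the frame by an integer motion vector (dy, dx).
    # Circular shift is a simplifying stand-in for block-based
    # motion compensation, chosen so the transform inverts exactly.
    return np.roll(frame, shift=mv, axis=(0, 1))

def mctf_haar_analysis(f_even, f_odd, mv):
    # Prediction step: predict the odd frame from the motion-
    # compensated even frame; the residual is the high-pass band.
    h = f_odd - motion_compensate(f_even, mv)
    # Update step: update the even frame along the reverse motion
    # trajectory to form the low-pass (temporally filtered) band.
    l = f_even + 0.5 * motion_compensate(h, (-mv[0], -mv[1]))
    return l, h

def mctf_haar_synthesis(l, h, mv):
    # Invert the lifting steps in reverse order.
    f_even = l - 0.5 * motion_compensate(h, (-mv[0], -mv[1]))
    f_odd = h + motion_compensate(f_even, mv)
    return f_even, f_odd

# Perfect reconstruction under this simplified motion model:
rng = np.random.default_rng(0)
f0, f1 = rng.random((8, 8)), rng.random((8, 8))
l, h = mctf_haar_analysis(f0, f1, mv=(1, 2))
r0, r1 = mctf_haar_synthesis(l, h, mv=(1, 2))
assert np.allclose(r0, f0) and np.allclose(r1, f1)
```

In a 2D+t+2D scheme, a spatial 2D wavelet transform would precede and/or follow this temporal decomposition, and the watermark would be embedded in a chosen spatio-temporal subband (e.g. the low-pass band `l`).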