In this paper, we propose a novel method to automatically detect and segment duplicated regions within an image. Our method takes three steps: 1) detect and locate the duplicated region pair using a modified Efficient Subwindow Search (ESS) algorithm, 2) segment the duplicated regions using a planar homography constraint, and 3) distinguish the tampered region from the authentic one by analysing their contours. The contribution of our method is three-fold. First, we generalize duplication from the traditional pure copy-paste case, which involves only translation, to more general cases involving a planar homography transformation (for example, scaling and rotation). Second, for the pure-translation case, the time complexity is reduced from the best previously reported O(P log P) to O(P), where P is the number of pixels in the image. Third, our method can also detect multiple duplications within one image. We evaluate our method on the INRIA Annotations for Graz-02 dataset (IG02), and the experimental results show that it achieves precision and recall of 93.5% and 82.7%, respectively.
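The O(P) pure-translation case can be illustrated with hash-based block matching: identical fixed-size blocks fall into the same hash bucket, and matching blocks vote for their spatial offset. This is a minimal sketch of the general idea, not the paper's modified ESS algorithm; the block size and vote threshold are assumed parameters, and the O(P) behaviour holds only when buckets stay small (e.g. on textured images).

```python
import numpy as np

def detect_copy_move(img, block=8, min_votes=10):
    """Sketch of pure-translation copy-move detection via exact block hashing.

    Each block's raw bytes act as its hash key; blocks with identical
    content land in the same bucket, and every matching pair votes for
    the offset between them. Parameters `block` and `min_votes` are
    illustrative choices, not values from the paper.
    """
    h, w = img.shape
    buckets = {}   # block bytes -> list of top-left positions seen so far
    votes = {}     # (dy, dx) offset -> number of matching block pairs
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            key = img[y:y + block, x:x + block].tobytes()
            for (py, px) in buckets.get(key, []):
                off = (y - py, x - px)
                votes[off] = votes.get(off, 0) + 1
            buckets.setdefault(key, []).append((y, x))
    # Keep only offsets supported by enough matching blocks.
    return {off: n for off, n in votes.items()
            if n >= min_votes and off != (0, 0)}

# Usage: plant a duplicated 16x16 patch in a random image and recover its offset.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
img[40:56, 40:56] = img[8:24, 8:24]   # copy-paste with translation (32, 32)
offsets = detect_copy_move(img)
```

Because the copied patch is 16x16 and blocks are 8x8, 81 block pairs share the offset (32, 32), so that offset dominates the vote table. Extending this scheme to scaling and rotation is exactly where exact hashing breaks down, which motivates the homography-based formulation in the paper.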