A Robust Feature Matching Approach for Photography Originality Test

  • Authors:
  • Ce Gao, Yixu Song, Peifa Jia

  • Affiliations:
  • Tsinghua National Laboratory for Information Science and Technology, State Key Laboratory on Intelligent Technology and Systems, Department of Computer Science & Technology, Tsinghua University (all authors)

  • Venue:
  • Journal of Mathematical Imaging and Vision
  • Year:
  • 2012

Abstract

Nowadays, with the help of photo processing software, it is easy to "create" a photo from other people's photography works. As a result, more and more unoriginal works have appeared in photography contests. To protect copyright and maintain fairness, it is crucial to recognize such plagiarism. However, this is a difficult task, because most plagiarized photos have undergone copying, scaling, cropping, and other processing. Even worse, most originals do not carry any digital watermarks. In this paper, we propose a novel learning-based feature matching approach to deal with this problem. It uses affine-invariant features to identify bogus photos. First, we adopt an extremely fast algorithm to extract keypoints. Then, using color and texture representations, keypoints belonging to different objects or to the background are clustered into corresponding groups. Next, based on a partition of the deformation space, a multilayer ferns model is trained to recognize local patches and simultaneously provide coarse pose estimates. Finally, a linear predictor refines the estimate to obtain an accurate homography. We test our approach on several public datasets and a special dataset from a national photography database. The experimental results demonstrate that our method provides robust and powerful matching. Even in difficult matching conditions, where other state-of-the-art methods cannot yield good results, our approach performs remarkably well. Furthermore, since there is no need to compute complicated descriptors, our method is very fast at run time.
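
The abstract outlines a pipeline of fast keypoint extraction, patch recognition, and homography refinement. As a rough illustration of how such an originality check can be wired together, the sketch below uses OpenCV stand-ins: FAST corners for the keypoint stage, and an ORB-descriptor matcher plus a RANSAC homography in place of the paper's multilayer ferns model and linear predictor. All function names, thresholds, and the final decision rule are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of an affine-robust matching check between an original photo
# and a suspect photo. FAST + ORB + RANSAC are stand-ins for the paper's
# ferns-based patch recognition and linear-predictor refinement; the inlier
# threshold below is an arbitrary illustrative choice.
import cv2
import numpy as np

def looks_derived(original_path, suspect_path, min_inliers=30):
    img1 = cv2.imread(original_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(suspect_path, cv2.IMREAD_GRAYSCALE)

    # 1) Extremely fast keypoint extraction (FAST corners).
    fast = cv2.FastFeatureDetector_create(threshold=20)
    kp1 = fast.detect(img1, None)
    kp2 = fast.detect(img2, None)

    # 2) Describe local patches (ORB here replaces the trained ferns classifier).
    orb = cv2.ORB_create()
    kp1, des1 = orb.compute(img1, kp1)
    kp2, des2 = orb.compute(img2, kp2)
    if des1 is None or des2 is None:
        return False, None

    # 3) Coarse patch-to-patch matching between the two photos.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if len(matches) < min_inliers:
        return False, None

    # 4) Refine to an accurate homography with RANSAC; many geometrically
    #    consistent inliers suggest the suspect image was derived from the original.
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    inliers = int(mask.sum()) if mask is not None else 0
    return inliers >= min_inliers, H
```

This is only a baseline for comparison: the descriptor-free ferns approach described in the abstract avoids computing descriptors at run time, which is where the paper's speed advantage comes from.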