With the exponential growth of the Web, real-time identification of near-duplicate Web videos is becoming increasingly important for a wide spectrum of applications, including copyright detection and commercial monitoring. Although there has been significant research effort on efficiently identifying near-duplicates in large video collections, most approaches rely on global features that are sensitive to photometric variations such as illumination direction, intensity, color, and highlights. This paper proposes a novel local-feature-based approach that addresses the efficiency and scalability issues of near-duplicate Web video identification. First, we represent each shot with a compact spatial signature generated from the trajectories of local patches. We then construct an efficient data structure that indexes these spatial signatures to retrieve the shots corresponding to a query video. Finally, we adopt a naive-Bayesian approach to estimate near-duplicates from the set of corresponding shots. To demonstrate the effectiveness and efficiency of the proposed method, we evaluate its performance on an open Web video data set containing about 10K Web videos.
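The pipeline in the abstract (index shot signatures, retrieve candidate shots, score with naive Bayes) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function names (`build_index`, `candidate_shots`, `naive_bayes_score`), the assumption that a spatial signature can be used as a hashable key, and the match probabilities are all assumptions introduced here for clarity.

```python
from collections import defaultdict
import math

def build_index(shots):
    """Index shots by their spatial signatures.

    `shots` is a sequence of (shot_id, signature) pairs, where the
    signature is assumed (for this sketch) to be hashable.
    """
    index = defaultdict(list)
    for shot_id, signature in shots:
        index[signature].append(shot_id)
    return index

def candidate_shots(index, query_signatures):
    """Return {shot_id: number of query signatures it matched}."""
    matches = defaultdict(int)
    for sig in query_signatures:
        for shot_id in index.get(sig, []):
            matches[shot_id] += 1
    return matches

def naive_bayes_score(match_count, total_query_shots,
                      p_match_dup=0.8, p_match_rand=0.05):
    """Log-likelihood ratio that a candidate video is a near-duplicate.

    Treats per-shot matches as conditionally independent (the naive-Bayes
    assumption); the probabilities are illustrative placeholders.
    """
    hits = match_count
    misses = total_query_shots - match_count
    score = hits * math.log(p_match_dup / p_match_rand)
    score += misses * math.log((1 - p_match_dup) / (1 - p_match_rand))
    return score
```

A video whose shots match many query signatures receives a high log-likelihood ratio and is reported as a near-duplicate once the score exceeds a threshold; videos with few matches score below zero and are discarded.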