Near-duplicate video clip (NDVC) detection is a special case of content-based video search: its primary task is to identify videos derived from the same original source. An important step in NDVC detection is defining an effective similarity measure that captures both the frame-level and sequence-level information inherent in video clips. To this end, we propose a new similarity measure, termed Video Edit Distance (VED), which adopts a complementary information compensation scheme based on the visual features and sequence context of videos. Visual features carry the discriminative information of each video, while the sequence context captures its feature variation. To reduce the computational cost of inter-video comparison with VED, we extract key frames from each video sequence and map every key frame to a single symbol. Several techniques are proposed to compensate for the information lost in this symbolization. Experimental results demonstrate that the proposed measure is highly effective.
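The symbolization-plus-edit-distance core of the approach can be sketched as follows. This is a minimal illustration, not the paper's actual VED: the codebook, the `quantize` helper, and the plain Levenshtein distance are hypothetical stand-ins, and the compensation techniques the paper proposes are omitted.

```python
# Hypothetical sketch: map key-frame feature vectors to symbols via a
# codebook, then compare the resulting symbol sequences with the classic
# Levenshtein edit distance. The actual VED adds visual-feature and
# sequence-context compensation that is not shown here.

def quantize(frame_features, codebook):
    """Map each key frame's feature vector to its nearest codebook symbol."""
    def nearest(v):
        return min(range(len(codebook)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(codebook[i], v)))
    return [nearest(v) for v in frame_features]

def edit_distance(s, t):
    """Levenshtein distance between two symbol sequences (O(|s||t|) DP)."""
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        curr = [i]
        for j, b in enumerate(t, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (a != b)))  # substitution
        prev = curr
    return prev[-1]

# Toy usage: two 2-D "feature vectors" quantized against a 2-symbol codebook.
codebook = [(0.0, 0.0), (1.0, 1.0)]
symbols = quantize([(0.1, 0.1), (0.9, 0.8)], codebook)   # -> [0, 1]
dist = edit_distance([0, 1, 2], [0, 2, 2])               # -> 1
```

Reducing each key frame to one symbol is what makes the comparison cheap: a sequence of high-dimensional vectors becomes a short string, so inter-video comparison costs only a string edit distance rather than pairwise frame-feature matching.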