Adaptive multiple feedback strategies for interactive video search
CIVR '08 Proceedings of the 2008 international conference on Content-based image and video retrieval
Existing video retrieval research incorporates relevance feedback based on user-dependent interpretations to improve retrieval results. In this paper, we separate the relevance-feedback process into two distinct facets: (a) recall-directed feedback and (b) precision-directed feedback. The recall-directed facet employs general features such as text and high-level features (HLFs) to maximize efficiency and recall during feedback, making it well suited to large corpora. The precision-directed facet, in contrast, uses many other multimodal features in an active-learning setting for improved accuracy. Combined with a performance-based adaptive sampling strategy, this process continuously re-ranks a subset of instances as the user annotates. Experiments on the TRECVID 2006 dataset show that our approach is both efficient and effective.
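The two facets and the adaptive sampling loop can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the paper's actual model: the label-overlap scoring, the per-label weight updates, and the batch-growth rule are assumptions introduced purely to show the shape of a recall-directed pass followed by a precision-directed active-learning loop.

```python
def recall_directed_rank(corpus, query_terms):
    """Recall-directed facet (sketch): coarsely rank all shots by the
    overlap between query terms and their text/HLF labels. This is cheap
    enough to run corpus-wide, trading precision for recall."""
    def score(shot):
        return len(query_terms & shot["labels"])
    return sorted(corpus, key=score, reverse=True)

def precision_directed_loop(candidates, oracle, rounds=3, base_batch=4):
    """Precision-directed facet (sketch): an active-learning loop over a
    candidate subset. Each round, the user (here, `oracle`) annotates a
    sampled batch, per-label weights are updated from that feedback, the
    remaining subset is re-ranked, and the next batch size adapts to how
    many relevant shots the last round produced (a stand-in for the
    paper's performance-based adaptive sampling)."""
    weights = {}  # per-label relevance weights learned from user feedback
    batch = base_batch
    for _ in range(rounds):
        sampled = candidates[:batch]        # annotate the current top shots
        hits = 0
        for shot in sampled:
            rel = oracle(shot)              # user judgment: +1 / -1
            hits += rel > 0
            for lab in shot["labels"]:
                weights[lab] = weights.get(lab, 0) + rel
        candidates = candidates[batch:]
        # continuously re-rank the remaining subset with updated weights
        candidates.sort(
            key=lambda s: sum(weights.get(l, 0) for l in s["labels"]),
            reverse=True,
        )
        # adaptive sampling: grow the batch when feedback was informative
        batch = base_batch + hits
    return candidates, weights
```

In use, the recall-directed pass would supply the candidate subset that the precision-directed loop then refines, so the expensive multimodal re-ranking never touches the full corpus.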