We introduce a novel query-to-modality mapping approach to the TRECVid 2010 Known-Item Search (KIS) video task. Because a KIS query targets one specific video, it is typically verbose and mixes attributes from multiple modalities. Issuing all search terms to a single retrieval engine conflates search criteria across modalities and causes "topic drift". We propose decomposing a KIS query into a set of short uni-modal subqueries and issuing each to the search index built from the corresponding modality's features, such as text-based metadata or visual high-level features. To do so, we introduce novel syntactic query features and cast the query-to-modality mapping as a classification problem. Retrieval results on the TRECVid 2010 KIS dataset show that our approach outperforms existing methods by a significant margin.
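The pipeline described above can be sketched as follows. This is a minimal illustrative stand-in, not the paper's method: the subquery splitter and the cue-word "classifier" are hypothetical placeholders for the learned classifier over syntactic query features, and all names (`VISUAL_CUES`, `map_query`, etc.) are assumptions introduced here for illustration.

```python
# Hypothetical sketch of query-to-modality mapping: split a verbose
# known-item search query into short subqueries, then route each one
# to a modality-specific index ("metadata" text index vs. "visual"
# high-level concept index). The rule-based classifier below is a toy
# stand-in for the learned classifier described in the paper.

# Illustrative vocabulary of visual-concept cue words (assumed, not from the paper).
VISUAL_CUES = {"person", "car", "outdoor", "building", "face", "red"}

def split_subqueries(query: str) -> list[str]:
    """Decompose a verbose query into short subqueries.
    Here we simply split on commas; the real decomposition is richer."""
    return [part.strip() for part in query.split(",") if part.strip()]

def classify_modality(subquery: str) -> str:
    """Toy modality classifier: phrases containing visual-concept cue
    words go to the visual index, everything else to text metadata."""
    tokens = set(subquery.lower().split())
    return "visual" if tokens & VISUAL_CUES else "metadata"

def map_query(query: str) -> dict[str, str]:
    """Map each subquery of a verbose KIS query to a target modality."""
    return {sq: classify_modality(sq) for sq in split_subqueries(query)}

if __name__ == "__main__":
    q = "find the news clip titled election night, a person in a red car, outdoor scene"
    for subquery, modality in map_query(q).items():
        print(f"{modality:8s} <- {subquery}")
```

Each resulting (subquery, modality) pair would then be issued against the matching index and the per-modality result lists merged, avoiding the topic drift caused by querying one index with all terms at once.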