MIRACLE at VideoCLEF 2008: topic identification and keyframe extraction in dual language videos
CLEF'08 Proceedings of the 9th Cross-language evaluation forum conference on Evaluating systems for multilingual and multimodal information access

Using an information retrieval system for video classification
CLEF'08 Proceedings of the 9th Cross-language evaluation forum conference on Evaluating systems for multilingual and multimodal information access

VideoCLEF 2008: ASR classification with Wikipedia categories
CLEF'08 Proceedings of the 9th Cross-language evaluation forum conference on Evaluating systems for multilingual and multimodal information access

Metadata and multilinguality in video classification
CLEF'08 Proceedings of the 9th Cross-language evaluation forum conference on Evaluating systems for multilingual and multimodal information access

Overview of VideoCLEF 2009: new perspectives on speech-based multimedia content enrichment
CLEF'09 Proceedings of the 10th international conference on Cross-language evaluation forum: multimedia experiments
This paper describes experiments we conducted for the VideoCLEF 2009 classification task. In our second participation in the task, we treated classification as an information retrieval (IR) problem and used the Xtrieval framework [1] to run our experiments. We confirmed that the IR approach achieves strong results even though the data set had changed. We proposed an automatic threshold to limit the number of labels assigned per document. Query expansion outperformed the corresponding baseline experiments in terms of mean average precision. We also found that combining the ASR transcriptions with the archival metadata improved classification performance, except when query expansion was used.
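The classification-as-retrieval idea with an automatic label threshold can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: the Xtrieval retrieval step is replaced by a hypothetical score dictionary, and the specific threshold rule (here, keeping labels scoring at least a fixed fraction of the top score) is an assumption.

```python
def classify(scores, rel=0.5):
    """Assign labels by retrieval score, keeping every label whose
    score is at least `rel` times the top score.

    `scores` maps candidate labels to retrieval scores for one
    document (hypothetical values stand in for Xtrieval output);
    the relative-cutoff rule is one possible automatic threshold,
    not the one from the paper.
    """
    if not scores:
        return []
    top = max(scores.values())
    # Keep qualifying labels, ranked by descending score.
    return sorted((label for label, s in scores.items() if s >= rel * top),
                  key=lambda label: -scores[label])

# Example: "sports" falls below half the top score and is dropped.
scores = {"music": 2.1, "history": 1.3, "sports": 0.4}
print(classify(scores))  # → ['music', 'history']
```

A relative cutoff adapts to each document's score range, so documents with uniformly low retrieval scores still receive their strongest labels rather than none at all.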