The benchmarking of methods for video analysis and indexing has become a research problem in its own right. While considerable effort is now devoted to organizing evaluation campaigns at both international (e.g., TREC Video) and national levels, an in-depth analysis of the performance of the methods remains an open issue. Several aspects of method evaluation must be addressed: how to design a benchmarking corpus that covers a sufficiently wide range of applications, which tasks to address, and which evaluation metrics are most relevant. The Argos evaluation campaign, supported by the French Techno-Vision program, aimed at developing resources for benchmarking video content analysis and indexing methods. This paper describes the tasks evaluated, the way the content set was produced, the metrics and tools developed for the evaluations, and some of the results obtained at the end of the first phase. Perspectives based on current work conclude the paper.
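The abstract does not spell out the metrics used; as a minimal illustration of the kind of evaluation measures such campaigns typically rely on, the sketch below computes set-based precision and recall and rank-based average precision for a toy query (all identifiers and data are hypothetical, not taken from the Argos campaign):

```python
def precision_recall(retrieved, relevant):
    """Precision and recall for an unranked set of retrieved items."""
    hits = len(set(retrieved) & set(relevant))
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall


def average_precision(ranked, relevant):
    """Average precision over a ranked result list:
    mean of precision@k at each rank k where a relevant item appears."""
    relevant = set(relevant)
    hits, total = 0, 0.0
    for rank, item in enumerate(ranked, start=1):
        if item in relevant:
            hits += 1
            total += hits / rank  # precision at this rank
    return total / len(relevant) if relevant else 0.0


# Toy example: 4 results returned for a query, 3 items truly relevant.
p, r = precision_recall(["a", "b", "c", "d"], ["a", "c", "e"])
ap = average_precision(["a", "b", "c", "d"], ["a", "c", "e"])
```

Averaging `average_precision` over all queries of a campaign gives mean average precision (MAP), a standard summary figure in TREC-style evaluations.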