Experiments with Automatic Query Formulation in the Extended Boolean Model
TSD '09 Proceedings of the 12th International Conference on Text, Speech and Dialogue
CLEF'08 Proceedings of the 9th Cross-language evaluation forum conference on Evaluating systems for multilingual and multimodal information access
Overview of VideoCLEF 2009: new perspectives on speech-based multimedia content enrichment
CLEF'09 Proceedings of the 10th international conference on Cross-language evaluation forum: multimedia experiments
Combining word and phonetic-code representations for spoken document retrieval
CICLing'11 Proceedings of the 12th international conference on Computational linguistics and intelligent text processing - Volume Part II
Automatic tagging and geotagging in video collections and communities
Proceedings of the 1st ACM International Conference on Multimedia Retrieval
Hybrid and interactive domain-specific translation for multilingual access to digital libraries
NLP4DL'09/AT4DL'09 Proceedings of the 2009 international conference on Advanced language technologies for digital libraries
New metrics for meaningful evaluation of informally structured speech retrieval
ECIR'12 Proceedings of the 34th European conference on Advances in Information Retrieval
Spoken Content Retrieval: A Survey of Techniques and Technologies
Foundations and Trends in Information Retrieval
Query by babbling: a research agenda
Proceedings of the first workshop on Information and knowledge management for developing regions
Penalty functions for evaluation measures of unsegmented speech retrieval
CLEF'12 Proceedings of the Third international conference on Information Access Evaluation: multilinguality, multimodality, and visual analytics
The CLEF-2007 Cross-Language Speech Retrieval (CL-SR) track included two tasks: identifying topically coherent segments of English interviews in a known-boundary condition, and identifying time stamps that mark the beginning of topically relevant passages in Czech interviews in an unknown-boundary condition. Six teams participated in the English evaluation, performing both monolingual and cross-language searches over automatic speech recognition (ASR) transcripts, automatically generated metadata, and manually generated metadata. Four teams participated in the Czech evaluation, performing monolingual searches over ASR transcripts.