There is no data like less data: percepts for video concept detection on consumer-produced media
Proceedings of the 2012 ACM international workshop on Audio and multimedia methods for large-scale video analysis
A video's soundtrack is usually highly correlated with its content. Hence, audio-based techniques have recently emerged as a means for video concept detection, complementary to visual analysis. Most state-of-the-art approaches rely on the manual definition of predefined sound concepts such as "engine sounds" or "outdoor/indoor sounds." These approaches come with three major drawbacks: manual definitions do not scale, as they are highly domain-dependent; manual definitions are highly subjective with respect to annotators; and a large part of the audio content is omitted, since the predefined concepts are usually found in only a fraction of the soundtrack. This paper explores how unsupervised audio segmentation systems such as speaker diarization can be adapted to automatically identify low-level sound concepts similar to annotator-defined concepts, and how these concepts can be used for audio indexing. Speaker diarization systems are designed to answer the question "Who spoke when?" by finding segments in an audio stream that exhibit similar properties in feature space, i.e., that sound similar. Using a diarization system, all the content of an audio file is analyzed and similar sounds are clustered. This article provides an in-depth analysis of the statistical properties of similar acoustic segments identified by the diarization system on a predefined document set, and of the theoretical fitness of this approach for discerning one document class from another. It also discusses how diarization can be tuned to better reflect the acoustic properties of general sounds, as opposed to speech, and introduces a proof-of-concept system for multimedia event classification built on diarization-based indexing.
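The idea the abstract describes — clustering acoustically similar segments without predefined labels, then indexing a file by its cluster occupancy — can be sketched in a few lines. The following is a minimal toy illustration, not the paper's actual system: it uses greedy agglomerative clustering with a fixed distance threshold as a crude stand-in for a diarization system's merging-and-stopping criterion, and 2-D points as stand-ins for acoustic feature vectors. All function names and parameters are illustrative assumptions.

```python
# Toy sketch of diarization-style unsupervised audio indexing.
# Segments (here, synthetic 2-D "feature vectors") are merged bottom-up
# while their cluster centroids are closer than `threshold`; each file is
# then indexed by a bag-of-audio-words histogram over the clusters found.
import math
from collections import Counter

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def dist(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def agglomerative_cluster(segments, threshold):
    """Repeatedly merge the two closest clusters until no centroid pair
    is closer than `threshold` (a crude stand-in for the model-selection
    stopping criteria used in real diarization systems)."""
    clusters = [[s] for s in segments]
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = dist(centroid(clusters[i]), centroid(clusters[j]))
                if best is None or d < best[0]:
                    best = (d, i, j)
        if best[0] > threshold:
            break                      # remaining clusters are dissimilar
        _, i, j = best
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

def index_histogram(clusters):
    """Bag-of-audio-words index: segment count per discovered cluster."""
    return Counter({k: len(c) for k, c in enumerate(clusters)})

# Two synthetic "sound types": three quiet-ish and two loud-ish segments.
segments = [(0.10, 0.20), (0.15, 0.25), (0.12, 0.18),
            (0.90, 0.80), (0.95, 0.85)]
clusters = agglomerative_cluster(segments, threshold=0.3)
print(len(clusters))                                # -> 2
print(sorted(index_histogram(clusters).values()))   # -> [2, 3]
```

The resulting histogram plays the role of the document index: two files with similar histograms over the same clusters are acoustically alike, which is what lets the abstract's proof-of-concept classifier discern one document class from another without any hand-defined sound concepts.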