In many areas, multimedia technology has made its way into the mainstream. In the case of digital audio, this is manifested in numerous online music stores having turned into profitable businesses; the widespread adoption of digital audio on both home computers and mobile players shows the size of this market. Ways to automatically process and manage the growing size of private and commercial collections thus become increasingly important, as does the need to make music interpretable by computers. The most obvious representation of an audio file is its sound; there are, however, other ways of describing a song, for instance its lyrics, which characterise it in terms of content words. The lyrics of a piece of music may be orthogonal to its sound, and differ greatly from other text types in their (rhyme) structure. Exploiting these properties therefore has potential for typical music information retrieval tasks such as musical genre classification; so far, however, there has been a lack of means to efficiently combine these modalities. In this paper, we present findings from investigating advanced lyrics features such as the frequency of certain rhyme patterns, several part-of-speech features, and statistical features such as words per minute (WPM). We further analyse to what extent a combination of these features with existing acoustic feature sets can be exploited for genre classification, and provide experiments on two test collections.
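To illustrate the kind of lyrics features the abstract mentions, the sketch below computes a words-per-minute statistic and a naive rhyme-pattern count. This is a simplified illustration, not the paper's actual feature extractor: the function names, the suffix-based rhyme test, and the four-line sliding window are all assumptions made here for demonstration (real rhyme detection would use phonetic transcriptions rather than raw spelling).

```python
import re

def words_per_minute(lyrics: str, duration_seconds: float) -> float:
    """Statistical text feature: word count normalised by song length.
    Illustrative only; the paper's exact definition may differ."""
    words = re.findall(r"[A-Za-z']+", lyrics)
    return len(words) / (duration_seconds / 60.0)

def rhyme_pattern_counts(lines: list, suffix_len: int = 2) -> dict:
    """Naive rhyme detector (hypothetical helper): two lines 'rhyme' if
    their last words share a common spelling suffix. Counts couplet (AABB)
    and alternating (ABAB) patterns over a sliding window of four lines.
    A crude approximation -- robust systems work on phoneme sequences."""
    def ending(line):
        ws = re.findall(r"[A-Za-z']+", line.lower())
        return ws[-1][-suffix_len:] if ws else ""
    ends = [ending(l) for l in lines]
    counts = {"AABB": 0, "ABAB": 0}
    for i in range(len(ends) - 3):
        a, b, c, d = ends[i:i + 4]
        if a and a == b and c == d and a != c:
            counts["AABB"] += 1
        elif a and a == c and b == d and a != b:
            counts["ABAB"] += 1
    return counts
```

For example, the couplet "the cat / wore a hat / in the sun / having fun" yields one AABB match, and a four-word lyric over a 60-second track gives a WPM of 4.0. Such scalar and count-based features can then be concatenated with acoustic feature vectors before classification.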