Mood is an emerging metadata type and access point in music digital libraries and online music repositories. In this study, we present a comprehensive investigation of the usefulness of lyrics in music mood classification by evaluating and comparing a wide range of lyric text features, including linguistic and text-stylistic features. We then combine the best lyric features with features extracted from music audio using two fusion methods. The results show that combining lyrics and audio significantly outperformed systems using audio-only features. In addition, an examination of learning curves shows that the hybrid lyric + audio system needed fewer training samples to achieve the same or better classification accuracies than systems using lyrics or audio alone. These experiments were conducted on a unique large-scale dataset of 5,296 songs (with both audio and lyrics for each) representing 18 mood categories derived from social tags. The findings push forward the state of the art in lyric sentiment analysis and automatic music mood classification, and will help make mood a practical access point in music digital libraries.
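The abstract does not specify which two fusion methods were used, but the two standard options for combining lyric and audio features are feature-level (early) fusion and decision-level (late) fusion. The following sketch illustrates both with small, hypothetical NumPy arrays; the feature dimensions, class posteriors, and averaging rule are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature matrices for 6 songs (not the paper's dataset):
lyric_feats = rng.random((6, 4))   # e.g. bag-of-words / stylistic features
audio_feats = rng.random((6, 3))   # e.g. spectral / timbre features

# Early (feature-level) fusion: concatenate the two feature vectors per
# song, then train a single classifier on the combined representation.
early_fused = np.hstack([lyric_feats, audio_feats])
assert early_fused.shape == (6, 7)

# Late (decision-level) fusion: train one classifier per modality and
# combine their per-class probability estimates, here by simple averaging
# (an assumed rule for illustration).
p_lyrics = np.array([0.7, 0.2, 0.1])   # hypothetical mood-class posteriors
p_audio = np.array([0.5, 0.4, 0.1])
p_fused = (p_lyrics + p_audio) / 2
predicted_mood = int(np.argmax(p_fused))
print(p_fused, predicted_mood)  # [0.6 0.3 0.1] 0
```

Early fusion lets one classifier learn cross-modal interactions, while late fusion keeps the modalities' classifiers independent and merges only their decisions.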