WordNet: a lexical database for English
Communications of the ACM
MARSYAS: a framework for audio analysis
Organised Sound
Topic-bridged PLSA for cross-domain text classification
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
Multimodal Music Mood Classification Using Audio and Lyrics
ICMLA '08 Proceedings of the 2008 Seventh International Conference on Machine Learning and Applications
How do you feel about "dancing queen"?: deriving mood & theme annotations from user tags
Proceedings of the 9th ACM/IEEE-CS joint conference on Digital libraries
Feature selection for content-based, time-varying musical emotion regression
Proceedings of the international conference on Multimedia information retrieval
Probabilistic latent semantic analysis
UAI'99 Proceedings of the Fifteenth conference on Uncertainty in artificial intelligence
MM '11 Proceedings of the 19th ACM international conference on Multimedia
In this paper, we present a new method that represents the overall time-varying musical impression of a song as a pair of mood trajectories estimated from its lyrics and its audio signal. The mood trajectory of the lyrics is obtained by applying probabilistic latent semantic analysis (PLSA) to the words of the lyrics to estimate topics that represent impressions. The mood trajectory of the audio signal is estimated from acoustic features by multiple linear regression. In our experiments, we estimated the mood trajectories of 100 songs from Last.fm's Best of 2010. A detailed analysis of these 100 songs confirms that acoustic features yield the more accurate mood trajectories, and that 21% of the resulting trajectories match the actual musical moods tagged on Last.fm.
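The two estimation steps named in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: `plsa` fits topic distributions over lyric words with plain EM on a document-word count matrix, and `fit_mood_regression` fits a least-squares mapping from acoustic feature vectors to mood coordinates (e.g. valence/arousal). All function and variable names are illustrative assumptions.

```python
import numpy as np

def plsa(counts, n_topics, n_iters=100, seed=0):
    """PLSA via EM on a (docs x vocab) count matrix.

    Returns P(topic|doc) and P(word|topic)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    # Random initialisation, rows normalised to probability distributions.
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iters):
        # E-step: responsibilities P(z|d,w) for every doc-word pair.
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]   # docs x topics x words
        joint /= joint.sum(axis=1, keepdims=True) + 1e-12
        # M-step: re-estimate both distributions from expected counts.
        expected = counts[:, None, :] * joint           # docs x topics x words
        p_z_d = expected.sum(axis=2)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
        p_w_z = expected.sum(axis=0)
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

def fit_mood_regression(features, moods):
    """Multiple linear regression: least-squares weights mapping acoustic
    feature vectors to mood coordinates, one weight column per mood axis."""
    X = np.hstack([features, np.ones((len(features), 1))])  # add intercept
    weights, *_ = np.linalg.lstsq(X, moods, rcond=None)
    return weights
```

To form a mood trajectory, either model would be applied to successive segments of a song (lyric lines or audio frames), yielding a sequence of points in the mood space.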