A musical mood trajectory estimation method using lyrics and acoustic features

  • Authors:
  • Naoki Nishikawa; Katsutoshi Itoyama; Hiromasa Fujihara; Masataka Goto; Tetsuya Ogata; Hiroshi G. Okuno

  • Affiliations:
  • Dept. of Intelligence Science and Technology, Grad. School of Informatics, Kyoto University, Kyoto, Japan (Nishikawa, Itoyama, Ogata, Okuno); National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan (Fujihara, Goto)

  • Venue:
  • MIRUM '11: Proceedings of the 1st international ACM workshop on Music information retrieval with user-centered and multimodal strategies
  • Year:
  • 2011

Abstract

In this paper, we present a new method that represents the overall time-varying musical impression of a song as a pair of mood trajectories estimated from its lyrics and audio signals. The mood trajectory of the lyrics is obtained by using probabilistic latent semantic analysis (PLSA) to estimate topics (representing impressions) from the words in the lyrics. The mood trajectory of the audio signals is estimated from acoustic features by using multiple linear regression analysis. In our experiments, the mood trajectories of 100 songs in Last.fm's Best of 2010 were estimated. A detailed analysis of these 100 songs confirms that acoustic features provide more accurate mood trajectories, and that 21% of the resulting mood trajectories match the actual musical moods available at Last.fm.
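
As a rough illustration of the audio-based half of this idea (a minimal sketch, not the authors' implementation), the code below maps per-segment acoustic feature vectors to two-dimensional mood coordinates with multiple linear regression, so that predictions over consecutive segments form a mood trajectory. The feature dimensionality, the random placeholder data, and the valence/arousal reading of the output are all assumptions for demonstration.

```python
# Sketch only: multi-output linear regression from acoustic features to a
# 2-D mood coordinate per segment; consecutive segments give a trajectory.
# Data shapes and the valence/arousal interpretation are assumed, not from the paper.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Placeholder training data: acoustic feature vectors for annotated segments
# (e.g., MFCC statistics) and their 2-D mood labels (valence, arousal).
X_train = rng.normal(size=(200, 13))   # 200 segments x 13 acoustic features
y_train = rng.normal(size=(200, 2))    # 2-D mood coordinate per segment

model = LinearRegression()
model.fit(X_train, y_train)            # multiple linear regression, multi-output

# A new song split into consecutive segments: one predicted mood point per
# segment yields the song's mood trajectory over time.
X_song = rng.normal(size=(30, 13))     # 30 segments of one song
mood_trajectory = model.predict(X_song)  # shape (30, 2)
print(mood_trajectory[:5])
```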