Using emotional context from article for contextual music recommendation

  • Authors:
  • Chih-Ming Chen, Ming-Feng Tsai, Jen-Yu Liu, Yi-Hsuan Yang

  • Affiliations:
  • National Chengchi University, Taipei, Taiwan (ROC); Academia Sinica, Taipei, Taiwan (ROC)

  • Venue:
  • Proceedings of the 21st ACM international conference on Multimedia
  • Year:
  • 2013

Abstract

This paper proposes a context-aware approach that recommends music to a user based on the user's emotional state, as predicted from the article the user writes. We analyze the association between user-generated text and music using a real-world dataset with user-text-music tripartite information collected from the social blogging website LiveJournal. The audio information represents various perceptual dimensions of music listening, including danceability, loudness, mode, and tempo; the emotional text information consists of bag-of-words features and three-dimensional affective states within an article: valence, arousal, and dominance. To combine these factors for music recommendation, a factorization machine-based approach is taken. Our evaluation shows that the emotional context information mined from user-generated articles improves recommendation quality compared to both the collaborative filtering approach and the content-based approach.
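
The abstract does not spell out the model details, but the standard second-order factorization machine it refers to scores a feature vector that, in this setting, would concatenate user/song identifiers, bag-of-words text features, valence-arousal-dominance scores, and audio descriptors. The following is a minimal, hypothetical sketch of that scoring function (not the authors' implementation); all variable names and dimensions are illustrative.

```python
import numpy as np

def fm_score(x, w0, w, V):
    """Standard second-order factorization machine prediction:
    w0 + <w, x> + sum over feature pairs of <v_i, v_j> * x_i * x_j.

    x : combined feature vector (e.g., user/song one-hots, bag-of-words,
        valence/arousal/dominance, danceability, loudness, mode, tempo)
    w0: global bias, w: linear weights, V: (n_features, k) latent factors
    """
    linear = w0 + w @ x
    # Efficient O(n*k) form of the pairwise interaction term:
    # 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ]
    interactions = 0.5 * np.sum((V.T @ x) ** 2 - (V.T ** 2) @ (x ** 2))
    return linear + interactions

# Toy usage with hypothetical dimensions: n features, k latent factors.
n, k = 10, 4
rng = np.random.default_rng(0)
x = rng.random(n)                      # concatenated user/text/audio features
w0, w, V = 0.0, rng.normal(size=n), rng.normal(size=(n, k))
print(fm_score(x, w0, w, V))
```

In this formulation, the factorized pairwise term is what lets the recommender exploit interactions between the emotional context features and the audio features without learning a separate weight for every feature pair.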