This paper proposes a context-aware approach that recommends music to a user based on the emotional state predicted from the article the user is writing. We analyze the association between user-generated text and music using a real-world dataset of user–text–music tripartite information collected from the social blogging website LiveJournal. The audio information captures several perceptual dimensions of music listening, including danceability, loudness, mode, and tempo; the emotional text information consists of bag-of-words features and the three-dimensional affective state of an article: valence, arousal, and dominance. To combine these factors for music recommendation, we take a factorization machine-based approach. Our evaluation shows that the emotional context mined from user-generated articles does improve recommendation quality, compared with both a collaborative filtering approach and a content-based approach.
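To make the factorization-machine idea concrete, below is a minimal sketch of a second-order FM predictor in the style used by tools such as libFM. It assumes (this is an illustration, not the paper's implementation) that each instance concatenates one-hot user and music IDs with the audio descriptors and the valence/arousal/dominance text features into a single vector `x`; `fm_predict` and its parameter names are hypothetical.

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """Second-order factorization machine score (Rendle-style).

    x  : feature vector concatenating one-hot user/music IDs with audio
         descriptors (danceability, loudness, mode, tempo) and text
         affect scores (valence, arousal, dominance) -- illustrative layout.
    w0 : global bias
    w  : per-feature linear weights, shape (n_features,)
    V  : latent factors, shape (n_features, k); pairwise interactions
         are modeled as <V[i], V[j]> * x[i] * x[j].
    """
    linear = w0 + w @ x
    # O(k * n) trick: sum_{i<j} <v_i, v_j> x_i x_j
    #   = 0.5 * sum_f [ (sum_i v_if x_i)^2 - sum_i v_if^2 x_i^2 ]
    s = V.T @ x                    # (k,) sums of v_if * x_i
    s2 = (V ** 2).T @ (x ** 2)     # (k,) sums of v_if^2 * x_i^2
    return float(linear + 0.5 * np.sum(s * s - s2))
```

Ranking music for a given article then amounts to scoring each candidate track's feature vector and sorting by the predicted score; the latent factors let sparse ID features interact with the dense audio and affect features.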