Successful music recommendation systems need to incorporate information on at least three levels: the music content, the music context, and the user context. The first refers to features derived from the audio signal; the second to aspects of the music or artist that are not encoded in the audio but are nevertheless important to human music perception; the third to contextual aspects of the user, which change dynamically. In this paper, we briefly review the well-researched categories of music content and music context features before focusing on user-centric models, which have long been neglected in music retrieval and recommendation. In particular, we address the following tasks: (i) geospatial music recommendation from microblog data, (ii) user-aware music playlist generation on smartphones, and (iii) matching places of interest and music. The approaches presented for task (i) rely on large-scale data inferred from microblogs, motivated by the fact that social media are an unprecedented source of information about every topic of our daily lives; information about music items and artists is thus found in abundance in user-generated data. We discuss how to infer information relevant to music recommendation from microblogs, what can be learned from it, and different ways of incorporating this kind of information into state-of-the-art music recommendation algorithms. The approaches targeted at tasks (ii) and (iii) model the user more comprehensively than by location and listening habits alone. We report the results of a user study investigating the relationship between music listening activity and a large set of contextual user features, and, based on these, present an intelligent mobile music player that automatically adapts the current playlist to the user context.
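The kind of microblog mining described for task (i) can be illustrated with a minimal sketch. The posts, the `#nowplaying` pattern, and the city field are assumptions for illustration; a real pipeline would crawl geotagged posts and match artist names against a music knowledge base rather than relying on a fixed text format.

```python
import re
from collections import Counter

# Hypothetical microblog posts; real systems crawl geotagged posts
# carrying hashtags such as #nowplaying.
posts = [
    {"text": "#nowplaying Daft Punk - Get Lucky", "city": "Paris"},
    {"text": "#nowplaying Edith Piaf - La Vie en Rose", "city": "Paris"},
    {"text": "#nowplaying Daft Punk - One More Time", "city": "Vienna"},
]

# Assumed post format: "#nowplaying <artist> - <track>".
pattern = re.compile(r"#nowplaying\s+(?P<artist>.+?)\s+-\s+(?P<track>.+)",
                     re.IGNORECASE)

def listening_events(posts):
    """Extract (city, artist) listening events from #nowplaying posts."""
    events = []
    for post in posts:
        m = pattern.search(post["text"])
        if m:
            events.append((post["city"], m.group("artist").strip()))
    return events

# Per-city artist counts are the kind of aggregate a geospatial
# recommender can combine with content-based similarity.
counts = Counter(listening_events(posts))
```

From such counts, a location-aware recommender can weight artists by their popularity near the user's position.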
Finally, we discuss different methods to solve task (iii), i.e., to determine music that suits a given place of interest, for instance, a major monument. In particular, we investigate knowledge-based and tag-based methods for matching music and places.
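One simple form of tag-based matching is to represent both a place and each music item as weighted tag vectors and rank items by cosine similarity. The tag profiles below are invented for illustration; in practice, music tags might come from social tagging platforms and place tags from knowledge bases or web pages about the place.

```python
import math

def cosine(a, b):
    """Cosine similarity between two sparse tag-weight dicts."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical tag profiles for a place of interest and two tracks.
place = {"romantic": 0.9, "historic": 0.8, "classical": 0.6}
tracks = {
    "string quartet": {"classical": 0.9, "romantic": 0.7},
    "death metal track": {"aggressive": 0.9, "loud": 0.8},
}

# Rank tracks by how well their tags match the place profile.
ranked = sorted(tracks, key=lambda t: cosine(place, tracks[t]), reverse=True)
```

Under this sketch, the track whose tags overlap the place profile ranks first; items sharing no tags with the place score zero.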