Music is composed to be emotionally expressive, and emotional associations provide an especially natural domain for indexing and recommendation in today's vast digital music libraries. But such libraries require powerful automated tools, and the development of systems for the automatic prediction of musical emotion presents a myriad of challenges. The perceptual nature of musical emotion necessitates the collection of data from human subjects. Because the interpretation of emotion varies between listeners, each clip needs to be annotated by a distribution of subjects. In addition, the sharing of large music content libraries for the development of such systems, even for academic research, presents complicated legal issues which vary by country. This work presents a new publicly available dataset for music emotion recognition research along with a baseline system. To address the difficulties of emotion annotation, we have turned to crowdsourcing via Amazon Mechanical Turk and have developed a two-stage procedure for filtering out poor-quality workers. The dataset consists entirely of Creative Commons music from the Free Music Archive, which, as the name suggests, can be shared freely without penalty. The final dataset contains 1000 songs, each annotated by a minimum of 10 subjects, making it larger than many currently available music emotion datasets.
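Because each clip is rated by multiple subjects, per-clip labels are naturally summarized by the distribution of ratings rather than a single value. The following is a minimal illustrative sketch of such an aggregation step, not the authors' actual pipeline; the clip IDs, the rating values, and the two-dimensional valence/arousal scale shown here are assumptions for demonstration only.

```python
# Illustrative sketch (not the dataset's actual pipeline): summarizing
# per-clip emotion annotations from several workers by their mean and
# spread. Clip IDs and rating values below are hypothetical.
from statistics import mean, stdev

# hypothetical annotations: clip_id -> list of (valence, arousal) ratings
annotations = {
    "clip_001": [(6.0, 7.0), (5.5, 6.5), (6.5, 7.5), (6.0, 6.0)],
    "clip_002": [(3.0, 2.5), (2.5, 3.0), (3.5, 2.0), (3.0, 3.5)],
}

def aggregate(ratings):
    """Return (mean, standard deviation) for valence and arousal."""
    valence = [v for v, _ in ratings]
    arousal = [a for _, a in ratings]
    return {
        "valence": (mean(valence), stdev(valence)),
        "arousal": (mean(arousal), stdev(arousal)),
    }

summary = {clip: aggregate(r) for clip, r in annotations.items()}
```

The standard deviation gives a rough measure of inter-rater disagreement, which is one simple way to flag clips whose emotional interpretation varies widely between listeners.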