In the field of Music Information Retrieval, many tasks are not only difficult for machines to solve but also lack well-defined answers. In pursuing the automatic recognition of emotion in music, this lack of objectivity makes it difficult to train systems that rely on quantified labels for supervised machine learning. In recent years, researchers have begun to harness Human Computation to collect data that spans an entire excerpt of music. MoodSwings, a collaborative game, records dynamic (per-second) labels of players' mood ratings, in keeping with the uniquely time-varying nature of musical mood. Players collaborate to build consensus, ensuring the quality of the data collected. We present an analysis of the MoodSwings labels collected to date and propose several modifications to improve both the gameplay and the collected data as development moves forward.
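The per-second labeling and consensus-building idea can be sketched as follows. This is a minimal illustration, not the paper's actual aggregation method: the `per_second_consensus` helper and its `tolerance` parameter are hypothetical, assuming each player submits one rating per second on a normalized scale.

```python
from statistics import mean

def per_second_consensus(ratings, tolerance=0.2):
    """Aggregate per-second mood ratings from several players.

    ratings: list of equal-length lists, one per player, each holding
    one rating in [-1, 1] for every second of the music excerpt.
    Returns (consensus, agreement): the per-second mean rating, and the
    fraction of players whose rating falls within `tolerance` of that
    mean at each second (a simple proxy for label quality).
    """
    n_seconds = len(ratings[0])
    consensus, agreement = [], []
    for t in range(n_seconds):
        labels = [r[t] for r in ratings]  # all players' ratings at second t
        m = mean(labels)
        consensus.append(m)
        agreement.append(
            sum(abs(x - m) <= tolerance for x in labels) / len(labels)
        )
    return consensus, agreement

# Two players agree at second 0 but diverge at second 1.
c, a = per_second_consensus([[0.5, 0.9], [0.5, 0.1]])
```

Here `c` is `[0.5, 0.5]` and `a` is `[1.0, 0.0]`: the mean rating is identical at both seconds, but only the first second shows genuine agreement, which is exactly the kind of distinction a time-varying consensus check needs to capture.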