Improving music emotion labeling using human computation

  • Authors:
  • Brandon G. Morton; Jacquelin A. Speck; Erik M. Schmidt; Youngmoo E. Kim

  • Affiliations:
  • Drexel University, Philadelphia, PA (all authors)

  • Venue:
  • Proceedings of the ACM SIGKDD Workshop on Human Computation
  • Year:
  • 2010

Abstract

In the field of Music Information Retrieval, there are many tasks that are not only difficult for machines to solve but that also lack well-defined answers. In pursuing the automatic recognition of emotions within music, this lack of objectivity makes it difficult to train systems that rely on quantified labels for supervised machine learning. In recent years, researchers have begun to harness Human Computation for the collection of labels spanning entire excerpts of music. Our game, MoodSwings, records dynamic (per-second) labels of players' mood ratings of music, in keeping with the unique time-varying nature of musical mood, and players collaborate to build consensus, ensuring the quality of the data collected. We present an analysis of the MoodSwings labels collected to date and propose several modifications for improving both the quality of the gameplay and of the collected data as development moves forward.