1000 songs for emotional analysis of music

  • Authors:
  • Mohammad Soleymani; Micheal N. Caro; Erik M. Schmidt; Cheng-Ya Sha; Yi-Hsuan Yang

  • Affiliations:
  • Imperial College London, London, United Kingdom; Drexel University, Philadelphia, PA, USA; Drexel University, Philadelphia, PA, USA; National Taiwan University, Taipei, Taiwan, ROC; Academia Sinica, Taipei, Taiwan, ROC

  • Venue:
  • Proceedings of the 2nd ACM International Workshop on Crowdsourcing for Multimedia
  • Year:
  • 2013

Abstract

Music is composed to be emotionally expressive, and emotional associations provide an especially natural domain for indexing and recommendation in today's vast digital music libraries. But such libraries require powerful automated tools, and the development of systems for the automatic prediction of musical emotion presents myriad challenges. The perceptual nature of musical emotion necessitates the collection of data from human subjects. Because the interpretation of emotion varies between listeners, each clip needs to be annotated by a distribution of subjects. In addition, the sharing of large music content libraries for the development of such systems, even for academic research, presents complicated legal issues which vary by country. This work presents a new publicly available dataset for music emotion recognition research and a baseline system. In addressing the difficulties of emotion annotation we have turned to crowdsourcing, using Amazon Mechanical Turk, and have developed a two-stage procedure for filtering out poor-quality workers. The dataset consists entirely of Creative Commons music from the Free Music Archive, which, as the name suggests, can be shared freely without penalty. The final dataset contains 1000 songs, each annotated by a minimum of 10 subjects, which is larger than many currently available music emotion datasets.
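
The abstract does not spell out the two-stage worker-filtering procedure. As a rough illustration of one plausible ingredient of such a pipeline, the hypothetical sketch below drops crowd workers whose ratings deviate strongly from the leave-one-out consensus of the other annotators on the same clip. All identifiers, the example data, and the `MAX_DEVIATION` threshold are assumptions for illustration, not the authors' actual method.

```python
# Hypothetical consensus-based worker filtering; the paper's actual two-stage
# procedure may differ. Names, data, and the threshold are assumptions.
from collections import defaultdict
import statistics

# annotations: (worker_id, clip_id, rating) tuples, e.g. arousal on a 1-9 scale
annotations = [
    ("w1", "c1", 7), ("w1", "c2", 2), ("w1", "c3", 5),
    ("w2", "c1", 6), ("w2", "c2", 3), ("w2", "c3", 5),
    ("w3", "c1", 1), ("w3", "c2", 9), ("w3", "c3", 1),  # an outlier worker
]

def worker_agreement(annotations):
    """Mean absolute deviation of each worker from the leave-one-out clip mean."""
    by_clip = defaultdict(dict)
    for worker, clip, rating in annotations:
        by_clip[clip][worker] = rating
    deviations = defaultdict(list)
    for clip, ratings in by_clip.items():
        for worker, rating in ratings.items():
            others = [r for w, r in ratings.items() if w != worker]
            if others:
                deviations[worker].append(abs(rating - statistics.mean(others)))
    return {w: statistics.mean(ds) for w, ds in deviations.items()}

MAX_DEVIATION = 3.0  # assumed cutoff; would need tuning to the rating scale
scores = worker_agreement(annotations)
kept = {w for w, d in scores.items() if d <= MAX_DEVIATION}
filtered = [a for a in annotations if a[0] in kept]
print(sorted(scores.items()), "-> keeping", sorted(kept))
```

On the toy data above, the outlier worker `w3` shows a much larger mean deviation from the consensus than `w1` and `w2` and is removed; in a real deployment this check would typically follow an initial qualification stage and use many more clips per worker.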