Exploring the relationship between categorical and dimensional emotion semantics of music

  • Authors:
  • Ju-Chiang Wang; Yi-Hsuan Yang; Kaichun Chang; Hsin-Min Wang; Shyh-Kang Jeng

  • Affiliations:
  • Academia Sinica & National Taiwan University, Taipei City, Taiwan (R.O.C.); Academia Sinica, Taipei City, Taiwan (R.O.C.); King's College London, London, United Kingdom; Academia Sinica, Taipei City, Taiwan (R.O.C.); National Taiwan University, Taipei City, Taiwan (R.O.C.)

  • Venue:
  • Proceedings of the Second International ACM Workshop on Music Information Retrieval with User-Centered and Multimodal Strategies
  • Year:
  • 2012

Abstract

Computational modeling of music emotion has been addressed primarily by two approaches: the categorical approach, which sorts emotions into discrete mood classes, and the dimensional approach, which represents emotions as numerical values along a few dimensions such as valence and activation. Although the two approaches occupy opposite ends of a discrete-continuous spectrum, they share the common goal of understanding the emotion semantics of music. This paper presents the first computational model that unifies the two semantic modalities under a probabilistic framework, making it possible to explore the relationship between them computationally. With the proposed framework, mood labels can be mapped into the emotion space in an unsupervised, content-based manner, without any ground-truth annotations for training the semantic mapping. This function can be applied to automatically generate a semantically structured tag cloud in the emotion space. To demonstrate the effectiveness of the proposed framework, we qualitatively evaluate the mood tag clouds generated from two emotion-annotated corpora, and quantitatively evaluate the accuracy of the categorical-to-dimensional mapping by comparing the results against mappings created by psychologists, including the one proposed by Whissell and Plutchik and the one defined in the Affective Norms for English Words (ANEW).
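The core idea of the unsupervised mapping can be illustrated with a minimal sketch. The following is not the paper's actual probabilistic model; it assumes hypothetical content-based predictions (per-song valence/activation coordinates and per-song mood-label probabilities, here generated randomly) and places each label at the probability-weighted centroid of the songs' predicted coordinates, yielding a position for every mood tag in the emotion space without any ground-truth label-to-coordinate annotations.

```python
import numpy as np

# Hypothetical stand-ins for content-based model outputs over 100 songs:
# va[i] = (valence, activation) predicted for song i, each in [-1, 1];
# p[i, k] = probability that mood label k applies to song i.
rng = np.random.default_rng(0)
va = rng.uniform(-1.0, 1.0, size=(100, 2))
p = rng.dirichlet(np.ones(4), size=100)  # 4 hypothetical mood labels

def map_labels_to_emotion_space(va, p):
    """Place each mood label at the probability-weighted centroid
    of the songs' predicted (valence, activation) coordinates."""
    weights = p / p.sum(axis=0, keepdims=True)  # normalize per label
    return weights.T @ va                        # shape: (num_labels, 2)

coords = map_labels_to_emotion_space(va, p)
for label, (v, a) in zip(["happy", "angry", "sad", "relaxed"], coords):
    print(f"{label}: valence={v:+.2f}, activation={a:+.2f}")
```

The resulting per-label coordinates are what a tag-cloud layout in the valence-activation plane would be built from; since each coordinate is a convex combination of song positions, every label lands inside the region spanned by the songs.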