Modeling emotional content of music using system identification

  • Authors:
  • M. D. Korhonen; D. A. Clausi; M. E. Jernigan

  • Affiliations:
  • University of Waterloo, Ontario, Canada

  • Venue:
  • IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
  • Year:
  • 2005


Abstract

Research was conducted to develop a methodology to model the emotional content of music as a function of time and musical features. Emotion is quantified using the dimensions valence and arousal, and system-identification techniques are used to create the models. Results demonstrate that system identification provides a means to generalize the emotional content for a genre of music. The average R² statistic of a valid linear model structure is 21.9% for valence and 78.4% for arousal. The proposed method of constructing models of emotional content generalizes previous time-series models and removes ambiguity from classifiers of emotion.
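
To illustrate the general idea (not the authors' exact implementation), the sketch below fits a linear ARX-style model that predicts an emotion dimension such as arousal from lagged musical features and scores it with the R² statistic. The feature names, lag order, and data are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical inputs: T time steps of musical features (e.g., loudness,
    # tempo) and a continuous emotion annotation (e.g., arousal) at the same rate.
    T, n_features, n_lags = 500, 2, 4
    features = rng.normal(size=(T, n_features))
    arousal = rng.normal(size=T)  # stand-in for listener annotations

    # Build the regression matrix from lagged feature values (exogenous part).
    rows = [features[t - n_lags:t].ravel() for t in range(n_lags, T)]
    X = np.column_stack([np.ones(T - n_lags), np.asarray(rows)])
    y = arousal[n_lags:]

    # Ordinary least-squares fit of the linear model, then predictions.
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    y_hat = X @ coef

    # R²: fraction of variance in the annotation explained by the model.
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    print(f"R^2 = {1.0 - ss_res / ss_tot:.3f}")

With real feature extractions and listener annotations in place of the random stand-ins, the same R² computation would yield the kind of per-dimension scores reported in the abstract.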