Music compositional intelligence with an affective flavor

  • Authors:
  • Roberto Legaspi, Yuya Hashimoto, Koichi Moriyama, Satoshi Kurihara, Masayuki Numao

  • Affiliations:
  • Osaka University, Ibaraki, Osaka, Japan (all authors)

  • Venue:
  • Proceedings of the 12th International Conference on Intelligent User Interfaces (IUI '07)
  • Year:
  • 2007

Abstract

The consideration of human feelings in automated music generation by intelligent music systems, albeit a compelling theme, has received very little attention. This work computationally specifies a system's music compositional intelligence that is tightly coupled to the listener's affective perceptions. First, the system induces a model that describes the relationship between feelings and musical structures. The model is learned by applying the inductive logic programming paradigm of FOIL, coupled with the Diverse Density weighting metric, to a dataset of musical score fragments hand-labeled by the listener on a semantic differential scale of bipolar affective descriptor pairs. A genetic algorithm, whose fitness function is based on the acquired model and follows basic music theory, then generates variants of the original musical structures. Lastly, the system creates chordal and non-chordal tones from the GA-generated variants. Empirical results show that the system is 80.6% accurate on average in classifying the affective labels of musical structures and that it can automatically generate musical pieces that stimulate four kinds of impressions: favorable-unfavorable, bright-dark, happy-sad, and heartrending-not heartrending.
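
The pipeline the abstract describes, an induced affect model driving a genetic algorithm whose fitness function also encodes basic music theory, can be sketched in miniature. The sketch below is illustrative only and assumes much: affect_model_score is a toy stand-in for the paper's FOIL/Diverse Density model, the melodic-leap penalty stands in for its actual music-theory constraints, and fragments are bare lists of MIDI pitches rather than annotated score fragments.

```python
import random

def affect_model_score(fragment, target_label):
    # Hypothetical stand-in for the learned affect model: scores how well a
    # fragment (a list of MIDI pitches, 0-127) matches a target impression.
    # Toy heuristic: "bright" favors higher register, its opposite lower.
    mean_pitch = sum(fragment) / len(fragment)
    score = mean_pitch / 127.0
    return score if target_label == "bright" else 1.0 - score

def music_theory_penalty(fragment):
    # Crude placeholder for the music-theory part of the fitness function:
    # penalize melodic leaps larger than an octave (12 semitones).
    leaps = sum(1 for a, b in zip(fragment, fragment[1:]) if abs(a - b) > 12)
    return leaps / max(len(fragment) - 1, 1)

def fitness(fragment, target_label):
    # Combine the model-based affect score with the theory-based penalty.
    return affect_model_score(fragment, target_label) - music_theory_penalty(fragment)

def mutate(fragment, rate=0.2):
    # Randomly nudge some pitches by a step or two, clamped to MIDI range.
    return [min(127, max(0, p + random.choice([-2, -1, 1, 2])))
            if random.random() < rate else p
            for p in fragment]

def crossover(a, b):
    # One-point crossover of two equal-length fragments.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(seed_fragments, target_label, generations=50, pop_size=30):
    # Seed the population with mutated copies of the original fragments,
    # then evolve toward the target impression.
    population = [mutate(random.choice(seed_fragments)) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda f: fitness(f, target_label), reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=lambda f: fitness(f, target_label))

if __name__ == "__main__":
    seeds = [[60, 62, 64, 65, 67, 69, 71, 72]]  # a C major scale fragment
    print(evolve(seeds, "bright"))
```

In the paper the two fitness terms are far richer (a relational model over musical structures plus genuine harmonic constraints), but the shape is the same: the GA proposes variants of listener-labeled fragments and keeps those the affect model rates highly without violating music theory.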