Comparing content and context based similarity for musical data

  • Authors:
  • Ioannis Karydis, Katia Lida Kermanidis, Spyros Sioutas, Lazaros Iliadis

  • Affiliations:
  • Department of Informatics, Ionian University, Kerkyra 49100, Greece (Karydis, Kermanidis, Sioutas); Department of Forestry and Management of the Environment and Natural Resources, Democritus University of Thrace, Pandazidou 193, Orestiada 68200, Greece (Iliadis)

  • Venue:
  • Neurocomputing
  • Year:
  • 2013

Abstract

Measuring the similarity between two musical pieces is a hard problem. Humans perceive such similarity by drawing on a large amount of contextual semantic information. Commonly used content-based methodologies rely on data descriptors of limited semantic value and are thus approaching a performance "upper bound". Recent research on contextual information assigned as free-form text (tags) in social networking services has shown tags to be highly effective in improving the accuracy of music similarity. In this paper, a large-scale similarity measurement over 20k real musical pieces is performed using mainstream off-the-shelf methodologies that rely on both content and context. In addition, the accuracy of the examined methodologies is tested not only against objective metadata but also against real-life user listening data. Experimental results illustrate the conditionally substantial gains of the context-based methodologies, as well as the fact that these methods do not closely match the similarity derived from real-user listening data.
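To make the content/context distinction concrete, the sketch below contrasts the two views of similarity at their simplest: a similarity score computed over audio feature vectors (content) versus one computed over tag-occurrence vectors from a social tagging service (context). This is only an illustration of the general idea, not the paper's methodology; the cosine measure, the feature values, and the tag vocabulary are all assumptions made up for demonstration.

```python
# Illustrative sketch (not the paper's implementation): content-based vs.
# context-based similarity for two tracks. All values below are invented.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two vectors; 0.0 if either is all-zero."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

# Content-based view: low-level audio descriptors (e.g., timbre/rhythm features).
content_a = np.array([0.12, 0.80, 0.33, 0.05])
content_b = np.array([0.10, 0.75, 0.40, 0.07])

# Context-based view: binary tag-occurrence vectors over a shared vocabulary
# of user-assigned tags (hypothetical tag set).
tag_vocabulary = ["rock", "indie", "90s", "female vocals", "acoustic"]
tags_a = np.array([1, 1, 1, 0, 0])
tags_b = np.array([1, 1, 0, 0, 1])

print("content similarity:", cosine_similarity(content_a, content_b))
print("context (tag) similarity:", cosine_similarity(tags_a, tags_b))
```

The point of the contrast is that the tag vectors carry human-assigned semantics (genre, era, instrumentation), whereas the audio descriptors only encode signal-level properties, which is why context-based measures can capture aspects of perceived similarity that content descriptors miss.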