Semi-automatic semantic annotation tool for digital music

  • Authors:
  • Fazilatur Rahman; Jawed Siddiqi

  • Affiliations:
  • Sheffield Hallam University, Faculty of Arts, Computing, Engineering and Sciences, Sheffield, UK (both authors)

  • Venue:
  • iUBICOM '11: Proceedings of the 6th International Conference on Ubiquitous and Collaborative Computing
  • Year:
  • 2011

Abstract

The World Wide Web has changed the music industry by making huge amounts of music available to both music publishers and consumers, including ordinary listeners and end users. Web 2.0 tagging of music items by artist name, album title, or musical style or genre (technically termed syntactic metadata) has given rise to unstructured, free-form vocabularies. Music search based on such syntactic metadata requires the search query to contain at least one keyword from the vocabulary, and that keyword must be an exact match. The Semantic Web initiative by the W3C proposes machine-processable representations of information but does not stipulate how these can be applied specifically to music items. In this paper we present a novel approach that details a semi-automatic semantic annotation tool enabling music producers to generate music metadata through a mapping between music consumers' free-form tags and the acoustic metadata that can be extracted automatically from music audio. The proposed tool supports an ontology-guided annotation process and uses an MPEG-7 Audio compliant music annotation ontology represented in the dominant Semantic Web standard, OWL 1.0.
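
As an illustration of the kind of ontology-guided annotation the abstract describes, the following minimal sketch uses Python's rdflib to attach a consumer's free-form tag and an automatically extracted acoustic feature to a track as RDF triples. The namespace and the class and property names (mao:Track, mao:hasTag, mao:tempoBPM) are illustrative assumptions; the paper's actual MPEG-7 Audio compliant OWL vocabulary is not given in the abstract.

```python
from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import XSD

# Hypothetical music-annotation ontology namespace; a stand-in for the
# paper's MPEG-7 Audio compliant OWL 1.0 ontology.
MAO = Namespace("http://example.org/music-annotation#")

g = Graph()
g.bind("mao", MAO)

track = URIRef("http://example.org/tracks/42")

# Declare the track as an instance of the (assumed) mao:Track class.
g.add((track, RDF.type, MAO.Track))

# A consumer's free-form tag, captured as a structured annotation.
g.add((track, MAO.hasTag, Literal("mellow acoustic")))

# An automatically extracted acoustic feature (tempo in BPM here),
# standing in for MPEG-7 Audio low-level descriptors.
g.add((track, MAO.tempoBPM, Literal(92.0, datatype=XSD.float)))

print(g.serialize(format="turtle"))
```

In a semi-automatic workflow of the kind the abstract describes, a producer would review, confirm, or correct such generated triples rather than author them by hand.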