User specific training of a music search engine

  • Authors:
  • David Little, David Raffensperger, Bryan Pardo

  • Affiliations:
  • EECS Department, Northwestern University, Evanston, IL (all authors)

  • Venue:
  • MLMI'07 Proceedings of the 4th international conference on Machine learning for multimodal interaction
  • Year:
  • 2007

Abstract

Query-by-Humming (QBH) systems transcribe a sung or hummed query and search for related musical themes in a database, returning the most similar themes as a playlist. A major obstacle to effective QBH is variation between user queries and the melodic targets used as database search keys. Since it is not possible to predict all individual singer profiles before system deployment, a robust QBH system should be able to adapt to different singers after deployment. Currently deployed systems do not have this capability. We describe a new QBH system that learns from user-provided feedback on the search results, letting the system improve while deployed, after only a few queries. This is made possible by a trainable note segmentation system, an easily parameterized singer error model, and a straightforward genetic algorithm. Results show significant improvement in performance given only ten example queries from a particular user.
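The core idea in the abstract, tuning a parameterized singer error model with a genetic algorithm so that a user's labeled example queries rank their intended targets highly, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy melody database, the two-weight (pitch, rhythm) error model, and the fitness based on mean reciprocal rank are all assumptions introduced here for clarity.

```python
import random

random.seed(0)

# Toy melody database: each theme is a list of (pitch, duration) notes.
# Names and note values are illustrative, not from the paper.
DATABASE = {
    "twinkle": [(60, 1), (60, 1), (67, 1), (67, 1)],
    "ode":     [(64, 1), (64, 1), (65, 1), (67, 2)],
    "scale":   [(60, 1), (62, 1), (64, 1), (65, 1)],
}

def match_score(query, target, params):
    """Negative weighted note-by-note distance (higher = more similar).

    params = (w_pitch, w_rhythm): a stand-in for the paper's singer
    error model, penalizing pitch and rhythm deviations separately.
    """
    w_pitch, w_rhythm = params
    return -sum(w_pitch * abs(q[0] - t[0]) + w_rhythm * abs(q[1] - t[1])
                for q, t in zip(query, target))

def fitness(params, examples):
    """Mean reciprocal rank of the user-labeled correct target."""
    total = 0.0
    for query, correct in examples:
        ranked = sorted(DATABASE,
                        key=lambda name: match_score(query, DATABASE[name], params),
                        reverse=True)
        total += 1.0 / (ranked.index(correct) + 1)
    return total / len(examples)

def evolve(examples, pop_size=20, generations=30):
    """Simple GA over (w_pitch, w_rhythm): truncation selection,
    one-point crossover, Gaussian mutation."""
    pop = [(random.uniform(0, 2), random.uniform(0, 2)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda p: fitness(p, examples), reverse=True)
        survivors = pop[:pop_size // 2]          # keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = (random.choice((a[0], b[0])) + random.gauss(0, 0.1),
                     random.choice((a[1], b[1])) + random.gauss(0, 0.1))
            children.append((max(child[0], 0.0), max(child[1], 0.0)))
        pop = survivors + children
    return max(pop, key=lambda p: fitness(p, examples))

# One feedback example: a query sung near "twinkle" in pitch but with
# doubled durations, labeled by the user as matching "twinkle".
examples = [([(61, 2), (60, 2), (66, 2), (67, 2)], "twinkle")]
best = evolve(examples)
```

After training, `best` weights pitch deviations more heavily than rhythm for this singer, so the intended target ranks first; a weighting that stresses rhythm (e.g. `(0.1, 2.0)`) ranks it second. The real system adapts a richer error model and retrains note segmentation as well, but the feedback-driven loop has this shape.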