QueST: querying music databases by acoustic and textual features

  • Authors:
  • Bin Cui;Ling Liu;Calton Pu;Jialie Shen;Kian-Lee Tan

  • Affiliations:
  • Peking University, Beijing, China;Georgia Institute of Technology, Atlanta;Georgia Institute of Technology, Atlanta;Singapore Management University, Singapore, Singapore;National University of Singapore, Singapore, Singapore

  • Venue:
  • Proceedings of the 15th international conference on Multimedia
  • Year:
  • 2007

Abstract

With the continued growth of music content available on the Internet, music information retrieval has attracted increasing attention. An important challenge for music search is supporting both keyword-based and content-based queries efficiently and with high precision. In this paper, we present a music query system, QueST (Query by acouStic and Textual features), which supports both keyword-based and content-based retrieval in large music databases. QueST has two distinct features. First, it provides new index schemes that can efficiently handle various queries within a uniform architecture. Concretely, we propose a hybrid structure combining an inverted file and a signature file to support keyword search. For content-based queries, we introduce notions of similarity that capture various music semantics such as melody and genre. We extract acoustic features from each music object and map them to multiple high-dimensional spaces, one per similarity notion, using PCA and RBF neural networks. Second, we design a result fusion scheme, called the Quick Threshold Algorithm, to speed up the processing of complex queries involving both textual and multiple acoustic features. Our experimental results show that QueST offers higher accuracy and efficiency than existing approaches.
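The abstract does not detail the Quick Threshold Algorithm itself, but it describes it as a top-k fusion scheme over a textual score list and multiple acoustic score lists, the setting classically addressed by Fagin's Threshold Algorithm. The sketch below is a minimal Python illustration of that general pattern, not the paper's algorithm; all names (threshold_algorithm, ranked_lists, random_access) are illustrative assumptions.

```python
from heapq import nlargest

def threshold_algorithm(ranked_lists, random_access, k, agg=sum):
    """Simplified Threshold Algorithm-style top-k fusion.

    ranked_lists : one list of (object_id, score) per feature
                   (e.g. textual relevance, each acoustic space),
                   each sorted by score in descending order.
    random_access: feature index -> {object_id: score}, used to look
                   up an object's score under the other features.
    k            : number of fused results to return.
    agg          : monotone aggregation function over per-feature scores.
    """
    seen = {}   # object_id -> aggregated score
    depth = 0
    while True:
        frontier = []   # scores at the current depth, used as the threshold
        for i, lst in enumerate(ranked_lists):
            if depth >= len(lst):
                continue
            obj, score = lst[depth]
            frontier.append(score)
            if obj not in seen:
                # Random-access the object's score under every feature.
                scores = [random_access[j].get(obj, 0.0)
                          for j in range(len(ranked_lists))]
                seen[obj] = agg(scores)
        if not frontier:
            break   # all lists exhausted
        threshold = agg(frontier)   # upper bound for any unseen object
        top_k = nlargest(k, seen.items(), key=lambda kv: kv[1])
        if len(top_k) >= k and top_k[-1][1] >= threshold:
            return top_k   # no unseen object can enter the top k
        depth += 1
    return nlargest(k, seen.items(), key=lambda kv: kv[1])
```

The key property this relies on is monotonicity of the aggregation function: once the k-th best aggregated score seen so far meets or exceeds the threshold computed from the current sorted-access depth, scanning can stop early, which is the kind of early termination a fusion scheme over textual and acoustic lists exploits.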