Singing or humming to a music search engine is an appealing multimodal interaction paradigm, particularly for the small portable devices that are now ubiquitous. This work aims to overcome the main shortcoming of existing query-by-humming (QBH) systems: their poor scalability, which stems from the difficulty of automatically extending the melody database from audio recordings. A method is proposed that extracts the singing-voice melody from polyphonic music, providing the information needed to index it as an element of the database. A query pattern is searched for in the database by combining note-sequence matching with pitch time-series alignment. A prototype system was developed, and experiments were carried out to enable a fair comparison between manual and automatic expansion of the database. The obtained performance (85% top-10 retrieval accuracy), which is encouraging given the results reported to date, makes this a proof of concept that validates the approach.
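The two-stage search described above can be illustrated with a toy sketch: an edit distance over quantized pitch intervals shortlists candidate melodies (note-sequence matching, transposition-invariant), and dynamic time warping over the raw pitch contours ranks the shortlist (pitch time-series alignment). All function names, the database layout, and the parameter values here are illustrative assumptions, not the paper's actual implementation.

```python
def edit_distance(a, b):
    """Levenshtein distance between two symbol sequences."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]


def dtw_cost(x, y):
    """Dynamic time warping cost between two pitch contours (semitones),
    normalised by the combined sequence length."""
    inf = float("inf")
    n, m = len(x), len(y)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = abs(x[i - 1] - y[j - 1])
            d[i][j] = c + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m] / (n + m)


def intervals(notes):
    """Successive pitch intervals, so matching ignores transposition."""
    return tuple(b - a for a, b in zip(notes, notes[1:]))


def rank(query_notes, query_pitch, db, shortlist=10):
    """db: list of (name, note_sequence, pitch_contour) tuples.
    Stage 1 shortlists by note-interval edit distance;
    stage 2 re-ranks the shortlist by DTW over pitch contours."""
    qi = intervals(query_notes)
    stage1 = sorted(db, key=lambda e: edit_distance(qi, intervals(e[1])))
    return sorted(stage1[:shortlist], key=lambda e: dtw_cost(query_pitch, e[2]))
```

For example, a sung query transposed three semitones below a stored melody still matches in stage 1, because only the interval pattern is compared; stage 2 then separates candidates whose interval patterns tie but whose contours differ.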