A System for Automatic Chord Transcription from Audio Using Genre-Specific Hidden Markov Models
Adaptive Multimedia Retrieval: Retrieval, User, and Semantics
We describe an acoustic chord transcription system that uses symbolic data to train hidden Markov models and achieves best-in-class frame-level recognition results. We avoid the extremely laborious task of human annotation of chord names and boundaries, which must be done to provide machine-learning models with ground truth, by performing automatic harmony analysis on symbolic music files. In parallel, we synthesize audio from the same symbolic files and extract acoustic feature vectors that are in perfect alignment with the labels. We therefore generate a large set of labeled training data with minimal human labor, which in turn allows for richer models. Using the key information derived from the symbolic data, we build 24 key-dependent HMMs, one for each key. Each key model defines a unique state-transition characteristic and helps resolve ambiguities in the observation vector. Given acoustic input, we identify the musical key by choosing the key model with the maximum likelihood, and we obtain the chord sequence from the optimal state path of that key model; both are returned by a Viterbi decoder. This not only increases chord recognition accuracy but also yields the key of the piece. Experimental results show that models trained on synthesized data perform very well on real recordings, even though the labels automatically generated from symbolic data are not 100% accurate. We also demonstrate the robustness of the tonal centroid feature, which outperforms the conventional chroma feature.
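The decoding scheme described above, in which each key model is Viterbi-decoded and the model with the maximum likelihood supplies both the key label and the chord sequence (its optimal state path), can be sketched as follows. This is a minimal illustration under toy assumptions, not the paper's implementation: it uses two discrete observation symbols, two chord states, and two hypothetical key models that share emission probabilities but differ in their transition matrices, standing in for the 24 key-dependent HMMs over real acoustic feature vectors.

```python
import math

def viterbi_log(obs, log_pi, log_A, log_B):
    """Viterbi decoding in log space.

    obs       -- list of observation indices
    log_pi[s] -- log initial probability of state s
    log_A[s][t] -- log transition probability from state s to state t
    log_B[s][o] -- log probability of emitting observation o in state s
    Returns (best log-likelihood, optimal state path).
    """
    n = len(log_pi)
    delta = [log_pi[s] + log_B[s][obs[0]] for s in range(n)]
    back = []
    for o in obs[1:]:
        ptr, new = [], []
        for t in range(n):
            best_s = max(range(n), key=lambda s: delta[s] + log_A[s][t])
            new.append(delta[best_s] + log_A[best_s][t] + log_B[t][o])
            ptr.append(best_s)
        delta = new
        back.append(ptr)
    # Backtrack from the most likely final state.
    state = max(range(n), key=lambda s: delta[s])
    path = [state]
    for ptr in reversed(back):
        state = ptr[state]
        path.append(state)
    path.reverse()
    return max(delta), path

def L(p):
    """Safe log for toy probabilities."""
    return math.log(p) if p > 0 else float("-inf")

# Two hypothetical key models: same states (chords) and emissions,
# but each key defines its own state-transition characteristic.
log_pi = [L(0.5), L(0.5)]
log_B = [[L(0.9), L(0.1)], [L(0.1), L(0.9)]]
keys = {
    "key0": [[L(0.9), L(0.1)], [L(0.1), L(0.9)]],  # "sticky" transitions
    "key1": [[L(0.5), L(0.5)], [L(0.5), L(0.5)]],  # uniform transitions
}

obs = [0, 0, 0, 1, 1]  # toy frame-level observation sequence
scores = {k: viterbi_log(obs, log_pi, A, log_B) for k, A in keys.items()}
best_key = max(scores, key=lambda k: scores[k][0])
chords = scores[best_key][1]  # chord sequence = optimal state path
```

For this sequence the "sticky" model wins, so `best_key` is `"key0"` and the decoded chord sequence follows the observations, mirroring how a single Viterbi pass per key model yields both the key decision and the transcription.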