Polyphonic musical instrument recognition based on a dynamic model of the spectral envelope
ICASSP '09 Proceedings of the 2009 IEEE International Conference on Acoustics, Speech and Signal Processing
We present a computational model of musical instrument sounds that focuses on capturing the dynamic behavior of the spectral envelope. A set of spectro-temporal envelopes belonging to different notes of each instrument is extracted by means of sinusoidal modeling and subsequent frequency interpolation, then subjected to principal component analysis. The prototypical evolution of the envelopes in the resulting reduced-dimensional space is modeled as a nonstationary Gaussian process. This yields a compact representation in the form of a set of prototype curves in feature space, or equivalently of prototype spectro-temporal envelopes in the time-frequency domain. Finally, the obtained models are successfully evaluated on two music content analysis tasks: classification of instrument samples and detection of instruments in monaural polyphonic mixtures.
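To make the pipeline concrete, the following is a minimal sketch of the dimensionality-reduction stage: stacked spectro-temporal envelope frames are reduced with PCA, and each note then traces a curve in the low-dimensional feature space. The random envelopes, array sizes, and the frame-wise mean "prototype" are illustrative assumptions only; the paper obtains the envelopes from sinusoidal modeling with frequency interpolation and fits a nonstationary Gaussian process, not a simple mean, to the curves.

```python
import numpy as np

# Illustrative stand-ins (NOT real data): rows = time frames,
# cols = frequency bins of an interpolated spectral envelope.
rng = np.random.default_rng(0)
n_frames, n_bins, n_components = 50, 40, 3

# One spectro-temporal envelope per note of an instrument.
envelopes = [rng.random((n_frames, n_bins)) for _ in range(5)]
X = np.vstack(envelopes)  # stack all frames from all notes

# PCA via SVD on mean-centered frames.
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
basis = Vt[:n_components]  # principal spectral shapes

# Project each note's envelope into the reduced space: one curve per note.
curves = [(env - mean) @ basis.T for env in envelopes]

# Crude prototype curve: the frame-wise mean trajectory across notes
# (a placeholder for the paper's Gaussian-process model).
prototype = np.mean(curves, axis=0)  # shape (n_frames, n_components)
print(prototype.shape)
```

Each curve compactly summarizes how a note's spectral envelope evolves over time; comparing an observed curve against per-instrument prototypes is what enables the classification and detection tasks described above.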