Identification of music instruments in polyphonic sounds is difficult and challenging, especially when heterogeneous harmonic partials overlap with each other. This has stimulated research on sound separation for content-based automatic music information retrieval. Numerous successful approaches to musical feature extraction and selection have been proposed for instrument recognition in monophonic sounds. Unfortunately, none of those algorithms can be successfully applied to polyphonic sounds. Based on recent research in the classification of monophonic sounds and studies in speech recognition, the Moving Picture Experts Group (MPEG) standardized a set of features of digital audio content for the purpose of interpreting its meaning. Most of these features take the form of a large matrix or a vector of large size, which is not suitable for traditional data mining algorithms, while the other, smaller features are not sufficient for instrument recognition in polyphonic sounds. Therefore, these acoustical features alone cannot be successfully applied to the classification of polyphonic sounds. However, they contain critical information which carries music instruments' signatures. We have proposed a novel music information retrieval system with MPEG-7-based descriptors, and we built classifiers which can retrieve the important time-frequency timbre information and isolate sound sources in polyphonic musical objects, where two instruments are playing at the same time, by energy clustering between heterogeneous harmonic peaks.
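The central idea of the abstract — assigning the energy of spectral peaks, including overlapping harmonic partials, to candidate sound sources — can be sketched roughly as follows. This is a minimal illustration, not the authors' algorithm: the function names, the relative-frequency tolerance, the number of harmonics, and the rule that a peak goes to the first candidate whose ideal harmonic series matches it most closely are all assumptions made for the example.

```python
import numpy as np

def harmonic_series(f0, n_harmonics, sr):
    """Ideal harmonic frequencies of a tone with fundamental f0, below Nyquist."""
    freqs = f0 * np.arange(1, n_harmonics + 1)
    return freqs[freqs < sr / 2]

def cluster_peak_energy(peak_freqs, peak_mags, f0s, tol=0.03, n_harmonics=20, sr=44100):
    """Assign each detected spectral peak's energy to the candidate fundamental
    whose ideal harmonic series lies closest to it (within a relative
    tolerance), splitting the mixture's energy between the sources.
    In this simplified sketch, a partial shared by two harmonic series
    (e.g. harmonic 3 of 220 Hz and harmonic 2 of 330 Hz) is credited to the
    first matching candidate in f0s."""
    energy = np.zeros(len(f0s))
    for f, m in zip(peak_freqs, peak_mags):
        best, best_dev = None, tol
        for i, f0 in enumerate(f0s):
            harm = harmonic_series(f0, n_harmonics, sr)
            dev = np.min(np.abs(harm - f)) / f  # relative deviation from nearest harmonic
            if dev < best_dev:
                best, best_dev = i, dev
        if best is not None:
            energy[best] += m ** 2  # accumulate the peak's energy for that source
    return energy

# Two simultaneous tones, 220 Hz and 330 Hz, whose partials overlap at 660 Hz
peaks = np.array([220.0, 330.0, 440.0, 660.0, 880.0, 990.0])
mags  = np.array([1.0,   0.8,   0.5,   0.7,   0.3,   0.4])
print(cluster_peak_energy(peaks, mags, f0s=[220.0, 330.0]))
```

A real system would detect the peaks from an FFT of the mixture and estimate the candidate fundamentals first; the sketch only shows the clustering step that divides mixture energy between the two instruments' harmonic series.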