Space or time adaptive signal processing by neural network models
AIP Conference Proceedings 151 on Neural Networks for Computing
C4.5: programs for machine learning
Artificial Intelligence Review - Special issue on lazy learning
KDD-Based Approach to Musical Instrument Sound Recognition
ISMIS '02 Proceedings of the 13th International Symposium on Foundations of Intelligent Systems
Application of Temporal Descriptors to Musical Instrument Sound Recognition
Journal of Intelligent Information Systems
Blind Separation of Multiple Speakers in a Multipath Environment
ICASSP '97 Proceedings of the 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '97) - Volume 1
Speech recognition with dynamic Bayesian networks
Estimation of musical sound separation algorithm effectiveness employing neural networks
Journal of Intelligent Information Systems - Special issue: Intelligent multimedia applications
Musical instrument recognition using cepstral coefficients and temporal features
ICASSP '00 Proceedings of the 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing - Volume 02
Musical instrument timbres classification with spectral features
EURASIP Journal on Applied Signal Processing
Hierarchical Tree for Dissemination of Polyphonic Noise
RSCTC '08 Proceedings of the 6th International Conference on Rough Sets and Current Trends in Computing
Discriminant feature analysis for music timbre recognition and automatic indexing
MCD'07 Proceedings of the 3rd ECML/PKDD international conference on Mining complex data
Mining scalar representations in a non-tagged music database
ISMIS'08 Proceedings of the 17th international conference on Foundations of intelligent systems
Blind signal separation of similar pitches and instruments in a noisy polyphonic domain
ISMIS'06 Proceedings of the 16th international conference on Foundations of Intelligent Systems
Pitch and timbre detection methods applicable to monophonic digital signals are common. Successful detection of multiple pitches and timbres in polyphonic, time-invariant music signals, however, remains a challenge. This paper reviews such methods, sometimes called "Blind Signal Separation", and analyzes how musically trained human listeners overcome resonance, noise, and overlapping signals to identify which instruments are playing and then what pitch each instrument is playing. The part of the instrument and pitch recognition system presented here that identifies the dominant instrument in a base signal uses the temporal features proposed by Wieczorkowska [Slezak, D., Synak, P., Wieczorkowska, A., Wroblewski, J., 2002. KDD-based approach to musical instrument sound recognition. In: Hacid, M.-S., Ras, Z.W., Zighed, D.A., Kodratoff, Y. (Eds.), Foundations of Intelligent Systems. Proceedings of the 13th Symposium ISMIS 2002, Lyon, France. Berlin, Heidelberg, pp. 28-36.] in addition to the 11 standard MPEG-7 features. After retrieving a semantic match for that dominant instrument from the database, the system builds a foreign set of features to synthesize a new base signal that no longer contains the previously extracted dominant sound. The system may repeat this process until all recognizable dominant instruments in the segment are accounted for. The proposed methodology incorporates Knowledge Discovery, MPEG-7 segmentation, and Inverse Fourier Transforms.
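The iterative scheme described above (compute spectral/temporal features for the current signal, identify the dominant pitch, then resynthesize a residual signal via the inverse Fourier transform and repeat) can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: `spectral_features` and `subtract_dominant` are hypothetical simplifications, and the harmonic-masking subtraction stands in for the paper's database-driven resynthesis of the dominant instrument.

```python
import numpy as np

def spectral_features(frame, sr):
    """Toy spectral descriptors (centroid, spread) standing in for the
    MPEG-7 and temporal features used in the paper (simplified)."""
    power = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    total = power.sum() + 1e-12
    centroid = (freqs * power).sum() / total
    spread = np.sqrt(((freqs - centroid) ** 2 * power).sum() / total)
    return centroid, spread

def subtract_dominant(frame, sr, f0, bandwidth=20.0, n_harmonics=10):
    """Zero out the dominant pitch f0 and its harmonics in the frequency
    domain, then resynthesize the residual with an inverse FFT. This
    crude masking replaces the paper's synthetic base-signal step."""
    spectrum = np.fft.rfft(frame)
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    for h in range(1, n_harmonics + 1):
        spectrum[np.abs(freqs - h * f0) < bandwidth] = 0.0
    return np.fft.irfft(spectrum, n=len(frame))

# Usage: a one-second frame mixing two tones; removing the 440 Hz
# "dominant instrument" leaves mostly the 660 Hz component.
sr = 8000
t = np.arange(sr) / sr
frame = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 660 * t)
centroid, spread = spectral_features(frame, sr)
residual = subtract_dominant(frame, sr, f0=440.0)
```

In the full system this loop would repeat on `residual` until no recognizable dominant instrument remains in the segment.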