We provide a new solution to the problem of feature variations caused by overlapping sounds in instrument identification in polyphonic music. When multiple instruments play simultaneously, partials (harmonic components) of their sounds overlap and interfere, making the acoustic features differ from those of monophonic sounds. To cope with this, we weight features according to how much they are affected by overlapping. First, we quantitatively evaluate the influence of overlapping on each feature as the ratio of the within-class variance to the between-class variance in the distribution of training data obtained from polyphonic sounds. Then, we generate feature axes as a weighted mixture that minimizes this influence via linear discriminant analysis. In addition, we improve instrument identification by exploiting musical context. Experimental results showed that the recognition rates using both feature weighting and musical context were 84.1% for duo, 77.6% for trio, and 72.3% for quartet; those without either were 53.4%, 49.6%, and 46.5%, respectively.
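The core quantity described above — the ratio of within-class to between-class variance, used to measure how strongly sound overlaps perturb each feature — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, the inverse-square-root weighting is one plausible way to down-weight unstable features, and the full method additionally derives new axes with linear discriminant analysis.

```python
import numpy as np

def overlap_influence(X, y):
    """Per-feature ratio of within-class to between-class variance.

    A large ratio means the feature varies a lot inside each
    instrument class (e.g. due to overlapping partials) relative to
    how well it separates the classes, so it should get a small weight.
    X: (n_samples, n_features) features from polyphonic training data
    y: (n_samples,) instrument labels
    """
    grand_mean = X.mean(axis=0)
    within = np.zeros(X.shape[1])
    between = np.zeros(X.shape[1])
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        within += ((Xc - mu_c) ** 2).sum(axis=0)
        between += len(Xc) * (mu_c - grand_mean) ** 2
    return within / between

def weight_features(X, y):
    """Down-weight features in proportion to their overlap influence
    (an illustrative choice; the paper instead builds weighted
    feature axes via linear discriminant analysis)."""
    ratio = overlap_influence(X, y)
    return X / np.sqrt(ratio)
```

For example, a feature whose class means are far apart but whose within-class scatter is small gets a ratio near zero (kept almost unchanged), while a feature dominated by overlap-induced scatter gets a large ratio and is strongly attenuated.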