We propose a novel approach to detecting semantic regions (pure vocals, pure instrumental, and instrumental mixed with vocals) in acoustic music signals. The signal is first segmented at the beat level using our proposed rhythm-tracking algorithm. Cepstral coefficients are then extracted on the octave scale from each segment to characterize the music content. Finally, a hierarchical classification method detects the semantic regions. Unlike previous methods, our approach fully exploits music knowledge both in segmenting the signal and in detecting its semantic regions. Experimental results show that over 80% accuracy is achieved for semantic region detection.
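The feature-extraction step can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes a common construction of octave-scale cepstral coefficients: power spectrum of a windowed frame, log energies in octave-spaced bands, then a DCT-II to decorrelate. The function name `octave_cepstral_coeffs` and the parameters (`fmin`, `n_ceps`) are illustrative choices, not from the paper.

```python
import numpy as np

def octave_cepstral_coeffs(frame, sr, n_ceps=6, fmin=62.5):
    """Sketch of octave-scale cepstral coefficients for one audio frame.

    Assumed pipeline (not the paper's exact method): Hann-windowed power
    spectrum -> log energy per octave band [f, 2f) -> DCT-II.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)

    # Octave-spaced band edges: each band spans [f, 2f) up to Nyquist.
    edges = []
    f = fmin
    while f * 2 <= sr / 2:
        edges.append((f, f * 2))
        f *= 2

    # Log energy per band; small epsilon guards against log(0).
    log_energy = np.array([
        np.log(spectrum[(freqs >= lo) & (freqs < hi)].sum() + 1e-10)
        for lo, hi in edges
    ])

    # DCT-II of the log band energies yields the cepstral coefficients.
    n = len(log_energy)
    k = np.arange(n_ceps)[:, None]
    m = np.arange(n)[None, :]
    dct = np.cos(np.pi * k * (2 * m + 1) / (2 * n))
    return dct @ log_energy
```

In the proposed system such features would be computed per beat-level segment (rather than per fixed-length frame) and fed to the hierarchical classifier.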