Digital hearing aids impose strong complexity and memory constraints on the digital signal processing algorithms that implement their applications. This paper proposes a low-complexity approach for automatic sound classification in digital hearing aids. The proposed scheme, which operates on a frame-by-frame basis, consists of two stages: an analysis stage and a classification stage. The analysis stage provides a set of low-complexity signal features derived from fundamental frequency (F0) estimation. Here, F0 estimation is performed by a decimated difference function, which reduces the computational cost of the analysis stage. The classification stage has been designed to reduce complexity while maintaining high accuracy rates. Three low-complexity classifiers have been evaluated (a tree-based C4.5 classifier, a 1-Nearest Neighbor (1-NN) classifier, and a Multilayer Perceptron (MLP)); the MLP was chosen because it provides the best accuracy rates and fits the computational and memory constraints of ultra-low-power DSP-based hearing aids. The classification stage is composed of an MLP classifier followed by a Hidden Markov Model (HMM), providing a good trade-off between complexity and classification accuracy. The goal of the proposed approach is robust discrimination between the speech and nonspeech parts of audio signals in commercial digital hearing aids, where computational cost is a critical issue. For the experiments, an audio database including speech, music, and noise signals has been used.