This paper proposes Mandarin phrase recognition using dynamic-programming (DP) prediction errors of singleton-type recurrent neural fuzzy networks (SRNFNs); the method is called DP-SRNFN. The recurrent property of the SRNFN makes it suitable for processing temporal speech patterns. A Mandarin phrase comprises monosyllabic words, and SRNFN training is based on the word unit: N_w SRNFNs model N_w words, and each SRNFN receives the current frame feature and predicts the next frame of the word it models. To recognize one of N_P phrases, the prediction error of each trained SRNFN is computed at every frame, and DP finds the optimal path that maps the input frames to the best-matched SRNFNs (words) for each of the N_P phrases. The accumulated error of each phrase model is computed along its optimal path, and the phrase with the minimum error is the recognition result. To verify DP-SRNFN performance, this study conducted experiments on recognizing 30 Mandarin phrases; SRNFN training with noisy features for phrase recognition under different noisy environments was also conducted. DP-SRNFN performance is compared with that of hidden Markov models (HMMs). Results show that DP-SRNFN achieves higher recognition rates than HMMs in both clean and noisy environments.
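The DP scoring step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the per-frame prediction errors of every word model (the SRNFNs in the paper) have already been collected into a matrix, and it aligns the frame sequence to each phrase's ordered word list with a simple segmental DP in which each word covers at least one contiguous frame segment. The function names `phrase_score` and `recognize` are hypothetical.

```python
import numpy as np

def phrase_score(err, word_seq):
    """Accumulated prediction error of mapping T frames onto the
    ordered words of one phrase via dynamic programming.

    err      -- (N_w, T) array; err[w, t] is word model w's one-step
                prediction error at frame t (from the trained SRNFNs)
    word_seq -- indices of the phrase's words, in order
    Each word must cover at least one frame; segments are contiguous.
    """
    T = err.shape[1]
    K = len(word_seq)
    # D[k, t] = best accumulated error covering frames 0..t
    # with the first k+1 words of the phrase
    D = np.full((K, T), np.inf)
    D[0, :] = np.cumsum(err[word_seq[0], :])  # first word covers a prefix
    for k in range(1, K):
        w = word_seq[k]
        for t in range(k, T):  # need at least k earlier frames
            # either stay in word k, or enter word k at frame t
            D[k, t] = err[w, t] + min(D[k, t - 1], D[k - 1, t - 1])
    return D[K - 1, T - 1]

def recognize(err, phrases):
    """Return the index of the phrase with minimum accumulated error."""
    scores = [phrase_score(err, p) for p in phrases]
    return int(np.argmin(scores))
```

For example, with two word models over three frames, the phrase whose word order matches the low-error regions of the frames receives the smaller accumulated error and is selected.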