Identification of control parameters in an articulatory vocal tract model, with applications to the synthesis of singing.
A course in fuzzy systems and control.
Learning to speak: sensori-motor control of speech movements. Speech Communication (special issue on speech production: models and data).
Towards a neurocomputational model of speech production and perception. Speech Communication.
Real-Time Numerical Solution of Webster's Equation on a Nonuniform Grid. IEEE Transactions on Audio, Speech, and Language Processing.
Simulation of Losses Due to Turbulence in the Time-Varying Vocal System. IEEE Transactions on Audio, Speech, and Language Processing.
A fast approach for automatic generation of fuzzy rules by generalized dynamic fuzzy neural networks. IEEE Transactions on Fuzzy Systems.
Online adaptive fuzzy neural identification and control of a class of MIMO nonlinear systems. IEEE Transactions on Fuzzy Systems.
An intelligent adaptive control scheme for postsurgical blood pressure regulation. IEEE Transactions on Neural Networks.
Model-Based Reproduction of Articulatory Trajectories for Consonant–Vowel Sequences. IEEE Transactions on Audio, Speech, and Language Processing.
Articulatory Information for Noise Robust Speech Recognition. IEEE Transactions on Audio, Speech, and Language Processing.
Reproducing smooth vocal tract trajectories is critical for high-quality articulatory speech synthesis. This paper presents an adaptive neural control scheme for this task that combines fuzzy logic and neural networks. The scheme estimates motor commands from the trajectories of flesh-points on selected articulators; these commands then drive a second-order dynamical system that reproduces the movements of the underlying articulators. Initial experiments show that the controller can manipulate the mass-spring-based elastic tract walls of a two-dimensional articulatory synthesizer and realize efficient speech motor control. The proposed controller achieves high accuracy in online tracking of the lips, the tongue, and the jaw during the simulation of consonant-vowel sequences, and its generality and adaptability make it a promising basis for future control models in articulatory synthesis.
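The second-order dynamics underlying this style of control can be illustrated with a minimal sketch: a single flesh-point modeled as a damped mass-spring system whose equilibrium is set by a motor command. All parameter values and function names below are illustrative assumptions, not taken from the paper or its synthesizer.

```python
import numpy as np

def simulate_articulator(u, m=1.0, k=40.0, b=4.0, dt=0.005):
    """Integrate m*x'' + b*x' + k*(x - u[t]) = 0 with semi-implicit Euler.

    u is the motor-command sequence: at each step it gives the spring's
    equilibrium (target) position for the flesh-point. Mass m, stiffness k,
    and damping b are placeholder values, not the paper's parameters.
    """
    x, v = 0.0, 0.0          # position and velocity of the flesh-point
    traj = []
    for target in u:
        a = (-b * v - k * (x - target)) / m  # spring pull toward the command
        v += a * dt                           # semi-implicit Euler update
        x += v * dt
        traj.append(x)
    return np.array(traj)

# A constant command: the flesh-point settles toward the target position.
traj = simulate_articulator(np.full(2000, 1.0))
print(traj[-1])  # close to 1.0 once the transient has decayed
```

In the paper's setting, the controller's job is the inverse of this forward model: given an observed flesh-point trajectory, estimate the command sequence `u` that would reproduce it.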