In this study, an expert system for speaker identification from Turkish speech signals is presented. A discrete wavelet adaptive network based fuzzy inference system (DWANFIS) model is used for this purpose. The model consists of two layers: a discrete wavelet layer and an adaptive network based fuzzy inference system. The discrete wavelet layer performs adaptive feature extraction in the time-frequency domain and is composed of discrete wavelet decomposition and discrete wavelet entropy. The performance of the system is evaluated on repeated speech signals. The test results demonstrate the effectiveness of the developed intelligent system; the rate of correct classification is about 90.55% for the sample speakers.
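The feature-extraction step described above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact method: it uses a Haar wavelet basis and three decomposition levels as stand-in assumptions, computing the Shannon entropy of the relative subband energies as the "wavelet entropy" feature that would feed the fuzzy inference layer.

```python
import math

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficient lists."""
    n = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / math.sqrt(2) for i in range(n)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / math.sqrt(2) for i in range(n)]
    return approx, detail

def wavelet_entropy(signal, levels=3):
    """Shannon entropy of the relative subband energies after a
    multilevel Haar decomposition. The wavelet choice and level count
    are illustrative assumptions, not the paper's configuration."""
    bands = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        bands.append(detail)          # detail coefficients at each level
    bands.append(approx)              # final approximation band
    energies = [sum(c * c for c in band) for band in bands]
    total = sum(energies) or 1.0
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)

# Example: entropy of a 64-sample sinusoid (a proxy for a speech frame)
frame = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
feature = wavelet_entropy(frame, levels=3)
```

In a full system, one such entropy value (or a small vector of them, one per subband grouping) would be computed per speech frame and passed to the ANFIS classifier as its input feature.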