Current state-of-the-art speaker verification algorithms use Gaussian Mixture Models (GMM) to estimate the probability density function of the acoustic feature vectors. Previous studies have shown that phonemes differ in their discriminative power for the speaker verification task. To better exploit these differences, it seems reasonable to segment the speech into distinct speech classes and to carry out the speaker modeling for each class separately. Because transcribing databases is a tedious task, we prefer to use data-driven segmentation methods. If the number of automatically derived classes is comparable to the number of phonetic units, we can hypothesize that these classes correspond roughly to the phonetic units. We have decided to use the well-known Dynamic Time Warping (DTW) method to evaluate the distance between two sequences of speech feature vectors. If the two speech segments belong to the same speech class, we expect the DTW distortion measure to capture speaker-specific characteristics. The novelty of the proposed method is the combination of the DTW distortion measure with data-driven segmentation tools. The first experimental results of the proposed method, in terms of Detection Error Tradeoff (DET) curves, are comparable to current state-of-the-art speaker verification results, as obtained in NIST speaker recognition evaluations.
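As a minimal illustration of the DTW distortion measure the abstract refers to, the sketch below aligns two sequences of acoustic feature vectors with the classic dynamic-programming recursion and returns a length-normalized cumulative distortion. The function name, the Euclidean local distance, and the (n+m) normalization are our own illustrative choices, not details taken from the paper:

```python
import math

def dtw_distortion(seq_a, seq_b):
    """DTW distortion between two sequences of feature vectors
    (each sequence is a list of equal-length lists of floats)."""
    n, m = len(seq_a), len(seq_b)
    # cost[i][j] = minimal cumulative distortion aligning seq_a[:i] with seq_b[:j]
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # local Euclidean distance between frame i of A and frame j of B
            d = math.dist(seq_a[i - 1], seq_b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    # normalize by the combined length so that segments of different
    # durations yield comparable scores
    return cost[n][m] / (n + m)
```

In a class-wise verification setup of the kind described above, such a score would be computed only between segments assigned to the same data-driven class, and a low distortion would count as evidence for the claimed speaker.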