Searching through a speech memory for text-independent speaker verification

  • Authors / Affiliations:
  • Dijana Petrovska-Delacrétaz (DIVA Group, University of Fribourg, Informatics Dept., Switzerland)
  • Asmaa El Hannani (DIVA Group, University of Fribourg, Informatics Dept., Switzerland)
  • Gérard Chollet (ENST, TSI, Paris, France)

  • Venue:
  • AVBPA'03 Proceedings of the 4th international conference on Audio- and video-based biometric person authentication
  • Year:
  • 2003

Abstract

Current state-of-the-art speaker verification algorithms use Gaussian Mixture Models (GMM) to estimate the probability density function of the acoustic feature vectors. Previous studies have shown that phonemes have different discriminant power for the speaker verification task. To better exploit these differences, it seems reasonable to segment the speech into distinct classes and carry out speaker modeling for each class separately. Because transcribing databases is a tedious task, we prefer to use data-driven segmentation methods. If the number of automatic classes is comparable to the number of phonetic units, we can hypothesize that these classes correspond roughly to the phonetic units. We have decided to use the well-known Dynamic Time Warping (DTW) method to evaluate the distance between two sequences of speech feature vectors. If the two speech segments belong to the same speech class, we expect the DTW distortion measure to capture the speaker-specific characteristics. The novelty of the proposed method is the combination of the DTW distortion measure with data-driven segmentation tools. The first experimental results of the proposed method, in terms of Detection Error Tradeoff (DET) curves, are comparable to current state-of-the-art speaker verification results, as obtained in NIST speaker recognition evaluations.
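To illustrate the DTW distortion measure the abstract refers to, here is a minimal sketch, not the authors' implementation: it assumes the two speech segments are given as NumPy arrays of acoustic feature vectors (e.g. cepstral coefficients), uses Euclidean local distances, a standard symmetric step pattern, and normalizes by sequence length so segments of different durations are comparable.

```python
import numpy as np

def dtw_distortion(x, y):
    """DTW distortion between two feature-vector sequences
    x (shape n x d) and y (shape m x d).

    Illustrative sketch only: Euclidean local distance,
    symmetric step pattern, length normalization by (n + m).
    """
    n, m = len(x), len(y)
    # Local distance matrix: d[i, j] = ||x_i - y_j||
    d = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=2)
    # Cumulative cost with the classic three-way recursion
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = d[i - 1, j - 1] + min(
                D[i - 1, j],      # insertion
                D[i, j - 1],      # deletion
                D[i - 1, j - 1],  # match
            )
    # Normalize so the score does not grow with segment duration
    return D[n, m] / (n + m)
```

In a verification setting along the lines sketched in the abstract, such a distortion would be computed only between segments assigned to the same data-driven class, so that low distortion reflects speaker similarity rather than phonetic mismatch.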