Improving Speaker Verification Using ALISP-Based Specific GMMs

  • Authors:
Asmaa El Hannani; Dijana Petrovska-Delacrétaz

  • Affiliations:
DIVA Group, Informatics Department, University of Fribourg, Switzerland (both authors)

  • Venue:
  • AVBPA'05 Proceedings of the 5th international conference on Audio- and Video-Based Biometric Person Authentication
  • Year:
  • 2005

Abstract

In recent years, research in speaker verification has expanded from using only the acoustic content of speech to exploiting higher-level sources of information, such as linguistic content, pronunciation, and idiolectal word usage. Phone-based models have been shown to be promising for speaker verification, but they require transcribed speech data in the training phase. The present paper describes a segmental Gaussian Mixture Model (GMM) approach to text-independent speaker verification based on data-driven Automatic Language Independent Speech Processing (ALISP). This system applies GMMs at the segmental level in order to exploit the differing amounts of speaker discrimination provided by the individual ALISP classes. We compared the segmental ALISP-based GMM method with a baseline global GMM system. Results obtained on the NIST 2004 Speaker Recognition Evaluation data showed that the segmental approach outperforms the baseline system. They also showed that not all ALISP units contribute equally to the discrimination between speakers.
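The core idea of the segmental approach described above can be illustrated with a minimal sketch: train one background GMM and one speaker GMM per segment class, score a test utterance class by class with log-likelihood ratios, and fuse the per-class scores with weights reflecting each class's assumed discriminative power. This is a simplified toy illustration using scikit-learn's `GaussianMixture`, not the paper's actual system (which uses data-driven ALISP segmentation and NIST 2004 data); the class labels, feature dimensions, and weighting scheme here are all hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

def make_frames(mean, n=200, dim=4):
    """Toy stand-in for acoustic feature frames of one segment class."""
    return rng.normal(mean, 1.0, size=(n, dim))

# Hypothetical segment classes standing in for ALISP units.
classes = ["a", "b"]

# One background (world) GMM and one speaker GMM per segment class.
ubm = {c: GaussianMixture(n_components=2, random_state=0).fit(make_frames(0.0))
       for c in classes}
spk = {c: GaussianMixture(n_components=2, random_state=0).fit(make_frames(1.0))
       for c in classes}

def segmental_score(segments, weights=None):
    """Weighted average of per-class mean log-likelihood ratios
    (speaker model vs. background model)."""
    weights = weights or {c: 1.0 for c in classes}
    num = den = 0.0
    for c, frames in segments.items():
        # GaussianMixture.score returns mean per-frame log-likelihood.
        llr = spk[c].score(frames) - ubm[c].score(frames)
        num += weights[c] * llr
        den += weights[c]
    return num / den

# A test utterance drawn near the speaker model should score higher
# than one drawn near the background model.
target = {c: make_frames(1.0, n=50) for c in classes}
impostor = {c: make_frames(0.0, n=50) for c in classes}
print(segmental_score(target) > segmental_score(impostor))
```

Down-weighting or dropping a class in `weights` is one simple way to act on the paper's observation that not all segment classes discriminate between speakers equally well.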