Speaker Identification Using the VQ-Based Discriminative Kernels

  • Authors:
  • Zhenchun Lei;Yingchun Yang;Zhaohui Wu

  • Affiliations:
  • College of Computer Science and Technology, Zhejiang University, Hangzhou, P.R. China (all authors)

  • Venue:
  • AVBPA'05 Proceedings of the 5th international conference on Audio- and Video-Based Biometric Person Authentication
  • Year:
  • 2005

Abstract

In this paper, a class of VQ-based discriminative kernels is proposed for speaker identification. Vector quantization (VQ) is a well-known method in speaker recognition, but its performance is limited: only the accumulated distortion of an utterance is used, while the distribution of that distortion over the codebook is discarded. We map each utterance to a fixed-length vector built from this distribution together with the average distortion on every code vector, and then classify the vectors with support vector machines (SVMs), using a one-versus-rest scheme for the multi-class problem. Results on YOHO in the text-independent case show that the method improves performance greatly and is comparable with the performance of standard VQ and of the basic GMM.
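
To make the mapping concrete, the sketch below shows one way the utterance-to-vector step and the one-versus-rest SVM classification could be realized. It assumes frame-level features (e.g., MFCCs) and a codebook trained with k-means as a stand-in for LBG; the function names and the scipy/scikit-learn calls are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.cluster.vq import kmeans2              # codebook training (stand-in for LBG)
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.svm import SVC

    def utterance_to_vector(frames, codebook):
        """Map an utterance (n_frames x dim array of frame features) to a 2K-dim vector:
        for each code vector, the fraction of frames quantized to it and their average distortion."""
        K = codebook.shape[0]
        # Squared Euclidean distance from every frame to every code vector.
        d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d2.argmin(axis=1)                   # winning code vector per frame
        counts = np.bincount(nearest, minlength=K).astype(float)
        occupancy = counts / len(frames)              # distortion-source distribution
        avg_dist = np.zeros(K)
        for k in range(K):
            if counts[k] > 0:
                avg_dist[k] = d2[nearest == k, k].mean()
        return np.concatenate([occupancy, avg_dist])

    def train_speaker_id(train_utts, train_labels, K=64):
        """train_utts: list of (n_frames x dim) arrays; train_labels: speaker ids."""
        codebook, _ = kmeans2(np.vstack(train_utts), K, minit='++')
        X = np.array([utterance_to_vector(u, codebook) for u in train_utts])
        # One-versus-rest SVMs on the mapped vectors handle the multi-class problem.
        clf = OneVsRestClassifier(SVC(kernel='linear')).fit(X, train_labels)
        return codebook, clf

A linear SVM on the mapped vectors is used here only as a simple stand-in for the discriminative kernel discussed in the paper; both operate on the same per-code-vector occupancy and average-distortion statistics.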