Capturing Complementary Information via Reversed Filter Bank and Parallel Implementation with MFCC for Improved Text-Independent Speaker Identification

  • Authors:
  • Sandipan Chakroborty; Anindya Roy; Sourav Majumdar; Goutam Saha

  • Affiliations:
  • Indian Institute of Technology, India; Indian Institute of Technology, India; Indian Institute of Technology, India; Indian Institute of Technology, India

  • Venue:
  • ICCTA '07 Proceedings of the International Conference on Computing: Theory and Applications
  • Year:
  • 2007

Abstract

A state-of-the-art Speaker Identification (SI) system requires a robust feature extraction unit followed by a speaker modeling scheme that provides a generalized representation of these features. Over the years, Mel-Frequency Cepstral Coefficients (MFCC), modeled on the human auditory system, have served as the standard acoustic feature set for SI applications. However, owing to the structure of its filter bank, MFCC captures vocal tract characteristics more effectively in the lower frequency regions. This work proposes a new feature set based on a complementary filter bank structure that improves the distinguishability of speaker-specific cues present in the higher frequency zone. Unlike high-level features, which are difficult to extract, the proposed feature set adds little computational burden during extraction. When combined with MFCC through a parallel implementation of speaker models, the proposed feature improves upon the performance baseline of the MFCC-based system. The proposition is validated by experiments conducted on two different kinds of databases, namely YOHO (microphone speech) and POLYCOST (telephone speech), with two classifier paradigms, Gaussian Mixture Models (GMM) and the Polynomial Classifier (PC), and for various model orders.
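
The sketch below illustrates, under stated assumptions, the general idea of a complementary "reversed" filter bank alongside a conventional mel filter bank: the same cepstral pipeline is applied to a filter bank whose frequency axis has been flipped, so resolution is finer at high frequencies. All names (mel_filter_bank, cepstra), the parameter choices (sample rate, FFT size, number of filters), the flip operation, and the score-fusion weight are illustrative assumptions, not the authors' exact formulation.

import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filter_bank(n_filters, n_fft, sr):
    """Triangular filters spaced on the mel scale (narrow at low frequencies)."""
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_filters + 2)
    bin_pts = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(n_filters):
        l, c, r = bin_pts[i], bin_pts[i + 1], bin_pts[i + 2]
        for k in range(l, c):
            fbank[i, k] = (k - l) / max(c - l, 1)   # rising edge of triangle
        for k in range(c, r):
            fbank[i, k] = (r - k) / max(r - c, 1)   # falling edge of triangle
    return fbank

def cepstra(power_spec, fbank, n_ceps=19):
    """Log filter-bank energies followed by a DCT (MFCC-style cepstra)."""
    energies = np.maximum(power_spec @ fbank.T, 1e-10)
    return dct(np.log(energies), type=2, axis=-1, norm='ortho')[:, :n_ceps]

# Frame the signal, window it, and take the power spectrum (framing elided;
# random frames stand in for real speech here).
sr, n_fft, n_filters = 8000, 512, 20
frames = np.random.randn(100, n_fft)
power = np.abs(np.fft.rfft(frames * np.hamming(n_fft), n_fft)) ** 2

fbank = mel_filter_bank(n_filters, n_fft, sr)
fbank_rev = fbank[::-1, ::-1]        # flip filters across the frequency axis:
                                     # wide at low f, narrow (fine-resolution) at high f

mfcc_feat = cepstra(power, fbank)      # conventional MFCC-style features
rev_feat  = cepstra(power, fbank_rev)  # complementary reversed-filter-bank features

# Parallel combination (hypothetical): train one speaker model per feature
# stream and fuse their per-speaker scores, e.g. with a weight w in [0, 1]:
#   combined_score = w * score_mfcc + (1.0 - w) * score_rev

In practice the two feature streams would feed separate speaker models (e.g. GMMs or polynomial classifiers), and identification would use the fused scores; the equal or tuned weighting shown in the comment is only one plausible fusion rule.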