MLP internal representation as discriminative features for improved speaker recognition

  • Authors:
  • Dalei Wu; Andrew Morris; Jacques Koreman

  • Affiliations:
  • Institute of Phonetics, Saarland University, Saarbrücken, Germany (all authors)

  • Venue:
  • NOLISP'05: Proceedings of the 3rd International Conference on Non-Linear Analyses and Algorithms for Speech Processing
  • Year:
  • 2005


Abstract

Feature projection by non-linear discriminant analysis (NLDA) can substantially increase classification performance. In automatic speech recognition (ASR), the projection provided by the pre-squashed outputs of a one-hidden-layer multi-layer perceptron (MLP) trained to recognise speech sub-units (phonemes) has previously been shown to significantly increase ASR performance. This approach cannot be applied directly to speaker recognition, because there is no recognised set of "speaker sub-units" to provide a finite set of MLP target classes, and for many applications it is not practical to train an MLP with one output per target speaker. In this paper we show that the output from the second hidden layer (compression layer) of an MLP with three hidden layers, trained to identify a subset of 100 speakers selected at random from a set of 300 TIMIT training speakers, can provide a 77% relative error reduction for standard Gaussian mixture model (GMM) based speaker identification.
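The feature-extraction idea in the abstract can be sketched as follows: forward acoustic frames through the first two hidden layers of a trained speaker-ID MLP and keep the activations of the narrow second hidden (compression) layer as projected features, discarding the layers above it. The sketch below uses NumPy with random weights standing in for a trained network; all dimensions (39-dim frames, a 24-unit bottleneck, 100 speaker outputs implied by the training setup) are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def tanh_layer(x, W, b):
    # One fully connected layer with tanh squashing.
    return np.tanh(x @ W + b)

# Assumed dimensions: 39-dim acoustic frames, wide first and third hidden
# layers, and a narrow second ("compression") layer that yields the features.
d_in, h1, h_bottleneck = 39, 256, 24

# Random weights stand in for a trained speaker-identification MLP.
W1, b1 = rng.standard_normal((d_in, h1)) * 0.1, np.zeros(h1)
W2, b2 = rng.standard_normal((h1, h_bottleneck)) * 0.1, np.zeros(h_bottleneck)

def bottleneck_features(frames):
    """Forward frames through the first two hidden layers and return the
    compression-layer activations as low-dimensional projected features."""
    a1 = tanh_layer(frames, W1, b1)
    return tanh_layer(a1, W2, b2)  # layers above the bottleneck are discarded

frames = rng.standard_normal((500, d_in))  # 500 dummy acoustic frames
feats = bottleneck_features(frames)
print(feats.shape)  # (500, 24)
```

In the paper's pipeline these bottleneck features would then replace (or augment) the raw acoustic features when training one GMM per enrolled speaker; the tanh squashing also bounds every feature in (-1, 1), which is convenient for GMM modelling.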