Kernel Eigenspace-Based MLLR Adaptation

  • Authors:
  • Brian Kan-Wing Mak; Roger Wend-Huu Hsiao

  • Affiliations:
  • Dept. of Computer Sci., Hong Kong Univ. of Sci. & Technol.;-

  • Venue:
  • IEEE Transactions on Audio, Speech, and Language Processing
  • Year:
  • 2007

Abstract

In this paper, we propose an application of kernel methods for fast speaker adaptation based on kernelizing the eigenspace-based maximum-likelihood linear regression adaptation method. We call our new method "kernel eigenspace-based maximum-likelihood linear regression adaptation" (KEMLLR). In KEMLLR, speaker-dependent (SD) models are estimated from a common speaker-independent (SI) model using MLLR adaptation, and the MLLR transformation matrices are mapped to a kernel-induced high-dimensional feature space, wherein kernel principal component analysis is used to derive a set of eigenmatrices. In addition, a composite kernel is used to preserve row information in the transformation matrices. A new speaker's MLLR transformation matrix is then represented as a linear combination of the leading kernel eigenmatrices, which, though it exists only in the feature space, still allows the speaker's mean vectors to be found explicitly. As a result, at the end of KEMLLR adaptation, a regular hidden Markov model (HMM) is obtained for the new speaker, and subsequent speech recognition is as fast as normal HMM decoding. KEMLLR adaptation was tested and compared with other adaptation methods on the Resource Management and Wall Street Journal tasks using 5 or 10 s of adaptation speech. In both cases, KEMLLR adaptation gives the greatest improvement over the SI model, with an 11%-20% word error rate reduction.
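The kernel PCA step described in the abstract, deriving eigenvectors from the training speakers' vectorized MLLR transforms, can be sketched as follows. This is a minimal illustration assuming a plain RBF kernel over flattened transformation matrices; the paper's composite kernel, which preserves per-row information, and the subsequent maximum-likelihood estimation of a new speaker's combination weights are not reproduced here. All function names and parameter values are hypothetical.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.1):
    # Pairwise RBF kernel between the rows of A and B.
    sq = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def kernel_pca(X, n_components, gamma=0.1):
    """Project the training points onto the leading kernel eigenvectors.

    X: (n_speakers, d) matrix of vectorized MLLR transforms.
    Returns an (n_speakers, n_components) array of projections.
    """
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    # Center the kernel matrix in the feature space.
    one_n = np.ones((n, n)) / n
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecompose the centered kernel matrix (symmetric, so eigh).
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # Scale coefficients so the feature-space eigenvectors have unit norm.
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))
    return Kc @ alphas

# Toy example: 20 "speakers", each with a 3x4 MLLR transform, flattened.
rng = np.random.default_rng(0)
W = rng.normal(size=(20, 3 * 4))
Z = kernel_pca(W, n_components=5)
print(Z.shape)  # (20, 5)
```

In KEMLLR the eigenmatrices live only in the kernel-induced feature space, so a new speaker's transform is never reconstructed explicitly; instead, the combination weights over the leading eigenvectors are optimized so that the adapted mean vectors maximize the likelihood of the adaptation data.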