We present a unified framework for basis-based speaker adaptation that subsumes both eigenvoice speaker adaptation using principal component analysis (PCA) and speaker adaptation using two-dimensional PCA (2DPCA). The basic idea is to partition the Gaussian mean vector of a hidden Markov model (HMM), for each state and mixture component, into a group of subvectors and to stack all the subvectors of a training speaker model into a matrix. The shape of this matrix depends on the chosen subvector dimension; consequently, the basis vectors derived from the PCA of the training model matrices, and the corresponding speaker weights in the adaptation equation, vary in dimension as well. When the amount of adaptation data is small, a low-dimensional speaker weight combined with high-dimensional basis vectors gives good performance, whereas when the amount of adaptation data is large, a high-dimensional speaker weight combined with low-dimensional basis vectors gives good performance. In our experiments, choosing a basis-vector dimension between those of the eigenvoice method and the 2DPCA-based method yielded performance balanced between the two methods.
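The subvector-partitioning idea above can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the function name and the least-squares weight estimation are assumptions (the paper would estimate weights from adaptation statistics, e.g. by maximum likelihood), and `target_sv` stands in for a mean supervector estimated from the adaptation data. Setting the subvector dimension `q` equal to the full supervector dimension `D` recovers eigenvoice-style PCA (scalar weights per basis vector), while a small `q` recovers 2DPCA-style adaptation (short basis vectors, a larger weight per basis vector).

```python
import numpy as np

def basis_adaptation(train_sv, target_sv, q, K):
    """Unified basis-based adaptation sketch (hypothetical helper).

    train_sv : (S, D) training speaker supervectors (stacked HMM Gaussian means)
    target_sv: (D,) supervector estimated from the new speaker's adaptation data
    q        : subvector dimension; q == D reduces to eigenvoice PCA,
               small q reduces to 2DPCA-based adaptation
    K        : number of basis vectors retained
    """
    S, D = train_sv.shape
    n = D // q                        # number of subvectors per speaker
    M = train_sv.reshape(S, n, q)     # each speaker model as an n-by-q matrix
    Mbar = M.mean(axis=0)             # mean speaker matrix

    # 2DPCA-style scatter matrix over the q-dimensional subvector space
    G = sum((Ms - Mbar).T @ (Ms - Mbar) for Ms in M) / S
    evals, evecs = np.linalg.eigh(G)
    V = evecs[:, ::-1][:, :K]         # (q, K) basis vectors, top eigenvalues first

    # Speaker weights: one n-dimensional weight vector per basis vector,
    # found here by least-squares projection (an assumption for illustration)
    T = target_sv.reshape(n, q)
    W = (T - Mbar) @ V                # (n, K) speaker weights
    adapted = Mbar + W @ V.T          # rank-K reconstruction of the speaker matrix
    return adapted.reshape(D)
```

Note the trade-off the abstract describes: with `q == D` there is only `n = 1` subvector, so each weight is a scalar and few adaptation data suffice, whereas a small `q` shortens the basis vectors but inflates the weight to `n * K` free parameters, which is only reliable with more adaptation data.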