In speech feature extraction, a relatively simple strategy for enhancing the discriminability of feature vectors is to append their deltas. The dimension of the feature vector then increases markedly, so effectively reducing the dimension of the feature space is key to computational performance. In this paper, a step-weighted linear discriminant dimensionality reduction technique is proposed. Dimensionality reduction using linear discriminant analysis (LDA) is commonly based on optimizing a separability criterion in the output space. The resulting LDA optimization problem is linear, but such separability criteria are not directly related to classification accuracy in the output space. As a result, even the best weighting function chosen in the input space can yield poor classification of the data in the output space. With the step-weighted technique, the weighting function of the between-class scatter matrix is readjusted in the current output space each time one dimension is removed. We describe this method and present an application to a speaker-independent isolated digit recognition task.
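The core idea described above can be illustrated with a minimal sketch: a weighted pairwise between-class scatter matrix is built, one dimension is discarded per step, and the weights are recomputed in the current output space before the next step. This is not the paper's implementation; the weighting function `w_fn` (here an assumed inverse-square of the class-mean distance, as in fractional-step LDA) and all function names are illustrative.

```python
import numpy as np

def weighted_lda_step(X, y, w_fn):
    """One weighted-LDA projection: the between-class scatter is a sum of
    pairwise class-mean outer products, each scaled by w_fn(mean distance)."""
    classes = np.unique(y)
    d = X.shape[1]
    # within-class scatter Sw
    Sw = np.zeros((d, d))
    for c in classes:
        diff = X[y == c] - X[y == c].mean(axis=0)
        Sw += diff.T @ diff
    # weighted pairwise between-class scatter Sb
    Sb = np.zeros((d, d))
    means = {c: X[y == c].mean(axis=0) for c in classes}
    counts = {c: int((y == c).sum()) for c in classes}
    for i, ci in enumerate(classes):
        for cj in classes[i + 1:]:
            dm = means[ci] - means[cj]
            dist = np.linalg.norm(dm)
            Sb += counts[ci] * counts[cj] * w_fn(dist) * np.outer(dm, dm)
    # eigenvectors of Sw^{-1} Sb, sorted by decreasing eigenvalue
    evals, evecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-evals.real)
    return evecs[:, order].real

def stepwise_reduce(X, y, target_dim, w_fn=lambda dist: dist ** -2):
    """Drop one dimension per step, re-weighting in the reduced (output)
    space after each step -- a hypothetical sketch of the step-weighted idea."""
    Z = X.copy()
    while Z.shape[1] > target_dim:
        V = weighted_lda_step(Z, y, w_fn)
        Z = Z @ V[:, :Z.shape[1] - 1]  # keep all but the weakest direction
    return Z
```

Because the weights depend on class-mean distances in the *current* space, classes that drift close together after a reduction step automatically receive larger weights in the next step, which is the effect the step-wise re-weighting is meant to achieve.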