Speech Communication - Special issue on acoustic echo control and speech enhancement techniques
Feature extraction is an important component of pattern classification and speech recognition. Extracted features should discriminate classes from each other while remaining robust to environmental conditions such as noise. For this purpose, several feature transformations have been proposed; they fall into two main categories: data-dependent transformations and classifier-dependent transformations. The drawback of data-dependent transformations is that their optimization criteria differ from the measure of classification error, which can degrade the classifier's performance. In this paper, we propose a framework to optimize data-dependent feature transformations such as PCA (Principal Component Analysis), LDA (Linear Discriminant Analysis), and HLDA (Heteroscedastic LDA) using minimum classification error (MCE) as the main objective. The classifier itself is based on a Hidden Markov Model (HMM). In our proposed HMM minimum classification error technique, the transformation matrices are modified to minimize the classification error for the mapped features, and the dimension of the feature vector is unchanged. To evaluate the proposed methods, we conducted experiments on the TIMIT phone recognition and Aurora2 isolated word recognition tasks. The experimental results show that the proposed methods improve the performance of the PCA, LDA, and HLDA transformations for mapping Mel-frequency cepstral coefficients (MFCCs).
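The core idea of adjusting a feature-transformation matrix to minimize a smoothed classification-error count can be sketched as follows. This is a toy illustration only, not the paper's algorithm: a nearest-class-mean discriminant stands in for the HMM, the transform is initialized to the identity rather than PCA/LDA/HLDA, and the gradient is taken by finite differences. All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data in 4 dimensions (stand-ins for MFCC frame vectors).
X0 = rng.normal(size=(100, 4)) + np.array([1.5, 0.0, 0.0, 0.0])
X1 = rng.normal(size=(100, 4)) - np.array([1.5, 0.0, 0.0, 0.0])
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

def mce_loss(W, X, y, alpha=2.0):
    """Smoothed MCE loss for a square (dimension-preserving) transform W.

    Discriminant g_k(x) = -||W x - mu_k||^2, with mu_k the class mean in
    the mapped space; misclassification measure d = -g_true + g_rival;
    loss = mean of sigmoid(alpha * d), a smooth stand-in for error count.
    """
    Z = X @ W.T
    mus = np.stack([Z[y == k].mean(axis=0) for k in (0, 1)])
    g = -((Z[:, None, :] - mus[None, :, :]) ** 2).sum(axis=2)  # (N, 2)
    g_true = g[np.arange(len(y)), y]
    g_rival = g[np.arange(len(y)), 1 - y]
    d = -g_true + g_rival                    # d < 0 means correct
    return (1.0 / (1.0 + np.exp(-alpha * d))).mean()

def fd_grad(f, W, eps=1e-5):
    """Central finite-difference gradient of f with respect to W."""
    G = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy(); Wp[i, j] += eps
            Wm = W.copy(); Wm[i, j] -= eps
            G[i, j] = (f(Wp) - f(Wm)) / (2.0 * eps)
    return G

# Identity start; in practice one would start from a PCA/LDA/HLDA matrix.
W = np.eye(4)
loss0 = mce_loss(W, X, y)
for _ in range(100):
    W -= 0.1 * fd_grad(lambda M: mce_loss(M, X, y), W)
loss1 = mce_loss(W, X, y)
```

In the paper's setting the discriminant would instead be the HMM log-likelihood of each class, and the gradient of the smoothed MCE criterion with respect to the transform would be computed analytically rather than by finite differences; the update structure above is otherwise the same.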