Feature transformation techniques have been widely investigated to reduce feature redundancy and to introduce additional discriminative information, with the aim of improving the performance of automatic speech recognition (ASR). In this paper, we propose a novel method for obtaining a discriminative feature transformation based on an output coding technique for speech recognition. The output coding transformation projects the speech features from their original space into a new one in which each dimension captures information for distinguishing different phones. Using polynomial expansion, the short-time spectral features are first expanded into a high-dimensional space, where the generalized linear discriminant sequence (GLDS) kernel is applied to the sequences of input feature vectors. The output coding transformation, formulated via a set of linear SVMs, then projects the sequences of high-dimensional vectors into a tractable low-dimensional feature space, where the resulting features are well-separated continuous output codes for the subsequent multi-class classification problem. Our experimental results on the TIMIT corpus show that the proposed features achieve a 10.5% reduction in ASR error rate over conventional spectral features.
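The pipeline described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: it assumes a degree-2 polynomial expansion, averages the expanded frames of each sequence (the GLDS kernel reduces to a dot product of such averages), and uses regularized least squares as a stand-in for training the per-class linear SVMs. All function names and parameters are hypothetical.

```python
import numpy as np

def poly_expand(x):
    """Degree-2 polynomial expansion of one frame:
    [1, x_i, x_i*x_j for i<=j] -- a small stand-in for the
    expansion used with a GLDS-style kernel."""
    x = np.asarray(x, dtype=float)
    terms = [np.ones(1), x]
    for i in range(len(x)):
        terms.append(x[i] * x[i:])
    return np.concatenate(terms)

def avg_expanded(seq):
    """Average the expanded frames, mapping a variable-length
    utterance to a single fixed-size vector."""
    return np.mean([poly_expand(f) for f in seq], axis=0)

def train_output_coder(seqs, labels, n_classes, reg=1e-3):
    """One linear discriminant per class (regularized least
    squares here, as a stand-in for a linear SVM). Stacking the
    weight vectors gives the output-coding transformation W."""
    X = np.stack([avg_expanded(s) for s in seqs])
    labels = np.asarray(labels)
    A = X.T @ X + reg * np.eye(X.shape[1])
    ws = []
    for c in range(n_classes):
        y = np.where(labels == c, 1.0, -1.0)  # one-vs-rest targets
        ws.append(np.linalg.solve(A, X.T @ y))
    return np.stack(ws)  # shape: (n_classes, expanded_dim)

def transform(W, seq):
    """Project one sequence to its continuous output code:
    each dimension is one class discriminant's score."""
    return W @ avg_expanded(seq)
```

In this sketch the low-dimensional feature space has one dimension per phone class, so the continuous output code of an utterance is simply the vector of class-discriminant scores, which can then feed a multi-class classifier.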