This paper presents a family of discriminative manifold learning approaches to feature space dimensionality reduction for noise robust automatic speech recognition (ASR). The goal of these techniques is to preserve local manifold structure in the feature space while simultaneously maximizing the separability between classes of feature vectors. Relationships among feature vectors in the manifold space are defined using nonlinear kernels characterized by one of two distance measures: the conventional Euclidean distance and a cosine-correlation based distance. The performance of the proposed techniques is evaluated on two task domains, involving noise corrupted utterances of connected digits and of read newspaper text, and is compared to that of existing feature space transformation approaches, including linear discriminant analysis (LDA) and locality preserving projections (LPP). The proposed approaches provide a significant reduction in word error rate (WER) relative to these better-known techniques across a variety of noise conditions. A further contribution of the paper is to quantify the interaction between acoustic noise conditions and the shape and size of the local neighborhoods used in manifold learning to define local relationships among feature vectors. Based on this analysis, a procedure for reducing the impact of varying acoustic conditions on manifold learning is proposed.
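To make the construction concrete, the sketch below is a minimal illustration, not the authors' implementation: the function names, the neighborhood size k, and the kernel width rho are all illustrative assumptions. It builds the two Gaussian affinity kernels described in the abstract, one over Euclidean distance and one over a cosine-correlation distance, and uses them to form an intrinsic (same-class) and a penalty (different-class) neighborhood graph of the kind used in graph-based discriminative manifold learning:

```python
# Minimal sketch of discriminative neighborhood graphs with two
# kernel distance measures. Hypothetical names and parameters;
# intended only to illustrate the construction, not to reproduce
# the paper's method.
import numpy as np

def euclidean_affinity(X, rho=1.0):
    """Heat-kernel weights w_ij = exp(-||x_i - x_j||^2 / rho)."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-np.maximum(d2, 0.0) / rho)

def cosine_affinity(X, rho=1.0):
    """Kernel over a cosine-correlation distance d = 1 - cos(x_i, x_j)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    d = 1.0 - Xn @ Xn.T
    return np.exp(-(d ** 2) / rho)

def discriminative_graphs(X, y, k=5, rho=1.0, affinity=euclidean_affinity):
    """Intrinsic graph: edges to the k nearest same-class neighbors.
    Penalty graph: edges to the k nearest different-class neighbors."""
    W = affinity(X, rho)
    n = len(y)
    W_int = np.zeros_like(W)
    W_pen = np.zeros_like(W)
    for i in range(n):
        order = np.argsort(-W[i])  # neighbors by decreasing affinity
        same = [j for j in order if j != i and y[j] == y[i]][:k]
        diff = [j for j in order if y[j] != y[i]][:k]
        W_int[i, same] = W[i, same]
        W_pen[i, diff] = W[i, diff]
    # Symmetrize so the graphs are undirected.
    return np.maximum(W_int, W_int.T), np.maximum(W_pen, W_pen.T)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(40, 13))    # e.g. 13-dimensional cepstral features
    y = rng.integers(0, 4, size=40)  # 4 hypothetical phonetic classes
    W_int, W_pen = discriminative_graphs(X, y, k=5, rho=2.0,
                                         affinity=cosine_affinity)
```

In graph-embedding formulations of this kind, a linear projection is then typically obtained by solving a generalized eigenvalue problem that minimizes scatter over the intrinsic graph while maximizing scatter over the penalty graph; the choice of k and rho corresponds to the neighborhood shape and size whose interaction with acoustic noise the paper analyzes.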