IEEE/ACM Transactions on Audio, Speech and Language Processing (TASLP)
This paper proposes to use exemplar-based sparse representations for noise-robust automatic speech recognition. First, we describe how speech can be modeled as a linear combination of a small number of exemplars from a large speech exemplar dictionary. The exemplars are time-frequency patches of real speech, each spanning multiple time frames. We then propose to model speech corrupted by additive noise as a linear combination of noise and speech exemplars, and we derive an algorithm for recovering this sparse linear combination of exemplars from the observed noisy speech. We describe how the framework can be used for hybrid exemplar-based/HMM recognition by combining the exemplar activations with the phonetic information associated with the exemplars. As an alternative to hybrid recognition, the framework also supports a source separation approach, which enables exemplar-based feature enhancement as well as missing data mask estimation. We evaluate the performance of these exemplar-based methods on connected digit recognition using the AURORA-2 database. Our results show that the hybrid system performed substantially better than source separation or missing data mask estimation at lower signal-to-noise ratios (SNRs), achieving up to 57.1% accuracy at SNR = -5 dB. Although not as effective as the two baseline recognizers at higher SNRs, the novel approach offers a promising direction for future research on exemplar-based ASR.
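The core decomposition described above — expressing a noisy observation as a sparse, non-negative combination of speech and noise exemplars, then separating the two contributions — can be sketched with a simple multiplicative-update solver. This is a minimal illustration, not the authors' implementation: the dictionary here is random stand-in data, and the exemplar sizes, iteration count, and sparsity weight `lam` are all assumed values chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed exemplar shape: B mel bands x T frames, flattened into a vector.
B, T = 26, 10
D = B * T
n_speech, n_noise = 200, 100

# Random stand-ins for real speech and noise exemplar dictionaries
# (non-negative magnitude patches in the actual framework).
A_speech = rng.random((D, n_speech))
A_noise = rng.random((D, n_noise))
A = np.hstack([A_speech, A_noise])  # combined speech+noise dictionary

# Synthetic "noisy speech" observation: a sparse mix of a few exemplars.
x_true = np.zeros(n_speech + n_noise)
active = rng.choice(n_speech + n_noise, size=5, replace=False)
x_true[active] = rng.random(5) + 0.5
y = A @ x_true

# Sparse non-negative decomposition: multiplicative updates that decrease
# the KL divergence D(y || A x) plus an L1 sparsity penalty on x.
lam = 0.1  # assumed sparsity weight
x = np.ones(A.shape[1])
for _ in range(200):
    x *= (A.T @ (y / (A @ x + 1e-12))) / (A.sum(axis=0) + lam)

# Source-separation outputs: per-source reconstructions and a soft
# speech-presence mask usable for missing-data mask estimation.
x_s, x_n = x[:n_speech], x[n_speech:]
speech_hat = A_speech @ x_s
noise_hat = A_noise @ x_n
mask = speech_hat / (speech_hat + noise_hat + 1e-12)
```

After convergence, `speech_hat` plays the role of the enhanced speech features, while thresholding `mask` yields a binary missing-data mask; in the hybrid route, the activations `x_s` themselves would instead be mapped to phonetic scores via the labels attached to each speech exemplar.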