A better way to learn features: technical perspective
Communications of the ACM
An efficient learning procedure for deep Boltzmann machines
Neural Computation
Automatic speech recognition for under-resourced languages: A survey
Speech Communication
Deep feature learning using target priors with applications in ECoG signal decoding for BCI
IJCAI'13 Proceedings of the Twenty-Third International Joint Conference on Artificial Intelligence
Expert Systems with Applications: An International Journal
Gaussian mixture models are currently the dominant technique for modeling the emission distribution of hidden Markov models for speech recognition. We show that better phone recognition on the TIMIT dataset can be achieved by replacing Gaussian mixture models with deep neural networks that contain many layers of features and a very large number of parameters. These networks are first pre-trained as a multi-layer generative model of a window of spectral feature vectors, without making use of any discriminative information. Once the generative pre-training has designed the features, we perform discriminative fine-tuning using backpropagation to adjust the features slightly, making them better at predicting a probability distribution over the states of monophone hidden Markov models.
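The two-stage training procedure the abstract describes can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: it pre-trains a small stack of binary RBMs with one step of contrastive divergence (CD-1) on synthetic stand-ins for spectral feature windows, then does a simplified discriminative stage that trains only a softmax output layer over hypothetical HMM-state labels (the paper fine-tunes all layers with full backpropagation). All sizes and data here are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """One layer of the generative pre-training stack (binary RBM, CD-1)."""
    def __init__(self, n_vis, n_hid):
        self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))
        self.b_vis = np.zeros(n_vis)
        self.b_hid = np.zeros(n_hid)

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_hid)

    def cd1_step(self, v0, lr=0.05):
        # One Gibbs step from the data, then a contrastive-divergence update.
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = sigmoid(h0_sample @ self.W.T + self.b_vis)
        h1 = self.hidden_probs(v1)
        n = len(v0)
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_vis += lr * (v0 - v1).mean(axis=0)
        self.b_hid += lr * (h0 - h1).mean(axis=0)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Synthetic stand-ins: 200 "windows of spectral features" and 3 "HMM states".
X = rng.random((200, 20))
y = rng.integers(0, 3, 200)

# 1) Greedy layer-wise generative pre-training (labels are never used).
layers = [RBM(20, 16), RBM(16, 12)]
h = X
for rbm in layers:
    for _ in range(30):
        rbm.cd1_step(h)
    h = rbm.hidden_probs(h)   # output of this layer feeds the next RBM

# 2) Simplified discriminative stage: a softmax layer on the pre-trained
#    features, trained by gradient descent on cross-entropy.
W_out = rng.normal(0.0, 0.01, (12, 3))
targets = np.eye(3)[y]
for _ in range(200):
    p = softmax(h @ W_out)
    W_out -= 0.1 * h.T @ (p - targets) / len(h)

probs = softmax(h @ W_out)    # per-frame distribution over HMM states
```

In a real system the rows of `probs` would replace the Gaussian-mixture emission likelihoods during Viterbi decoding (after dividing by the state priors), and the fine-tuning gradient would flow through every RBM layer, not just the output weights.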