Discriminative training of HMMs for automatic speech recognition: A survey
Computer Speech and Language
Discriminative learning for minimum error classification [pattern recognition]
IEEE Transactions on Signal Processing
Rival penalized competitive learning for clustering analysis, RBF net, and curve detection
IEEE Transactions on Neural Networks
Discriminative GMM-HMM acoustic model selection using two-level Bayesian Ying-Yang harmony learning
IScIDE'12: Proceedings of the Third Sino-Foreign-Interchange Conference on Intelligent Science and Intelligent Data Engineering
This paper presents a comparative study of two discriminative methods, Rival Penalized Competitive Learning (RPCL) and Minimum Classification Error (MCE), on Large Vocabulary Continuous Speech Recognition (LVCSR) tasks. MCE aims at minimizing a smoothed sentence error on the training data, while RPCL avoids misclassification by enforcing the learning of the correct class and de-learning its best rival class. For a fair comparison, both discriminative mechanisms are implemented at the state level. The LVCSR results show that both MCE and RPCL outperform Maximum Likelihood Estimation (MLE), and that RPCL has better discriminative and generative abilities than MCE.
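The RPCL mechanism described above (learn the winner, de-learn its best rival) can be sketched as a single competitive-learning update. The following is a minimal illustration for cluster centers rather than HMM states; the function name, the learning rate `eta`, and the much smaller de-learning rate `gamma` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def rpcl_step(centers, x, eta=0.05, gamma=0.005):
    """One Rival Penalized Competitive Learning update (illustrative sketch).

    The winner (closest center) is learned toward the sample x, while its
    best rival (second-closest center) is de-learned away from x.
    eta is the learning rate; gamma is a much smaller de-learning rate.
    """
    dists = np.linalg.norm(centers - x, axis=1)
    winner, rival = np.argsort(dists)[:2]
    centers[winner] += eta * (x - centers[winner])    # pull winner toward x
    centers[rival] -= gamma * (x - centers[rival])    # push rival away from x
    return centers
```

Iterating such updates over training samples penalizes the best rival just enough to keep it from encroaching on the correct class, which is the discriminative effect contrasted with MCE's smoothed-error minimization in the study.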