This paper presents a comparative study of two discriminative methods, Rival Penalized Competitive Learning (RPCL) and Minimum Classification Error (MCE), for Large Vocabulary Continuous Speech Recognition (LVCSR). MCE aims at minimizing a smoothed sentence error on the training data, while RPCL avoids misclassification by enforcing the learning of the correct class and de-learning its best rival class. For a fair comparison, both discriminative mechanisms are implemented at the phone and/or hidden Markov state levels using the same training corpus. The results show that both the MCE-based and RPCL-based methods outperform the Maximum Likelihood Estimation (MLE) based method. Compared with the MCE-based method, the RPCL-based methods show better discrimination and generalization at both levels.
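The core RPCL mechanism mentioned in the abstract (strengthen the winning class while slightly de-learning its closest rival) can be illustrated in the clustering setting where RPCL was originally proposed. This is a minimal sketch, not the paper's LVCSR implementation; the names `rpcl_step`, `lr_win`, and `lr_rival` are illustrative assumptions.

```python
import numpy as np

def rpcl_step(centers, x, lr_win=0.05, lr_rival=0.002):
    """One Rival Penalized Competitive Learning update (illustrative sketch):
    the winning center is pulled toward the input x, while its best rival
    (the second-closest center) is pushed away with a much smaller
    de-learning rate."""
    dists = np.linalg.norm(centers - x, axis=1)
    order = np.argsort(dists)
    win, rival = order[0], order[1]
    centers[win] += lr_win * (x - centers[win])        # learn the winner
    centers[rival] -= lr_rival * (x - centers[rival])  # de-learn the rival
    return centers
```

The asymmetry between the large learning rate and the small de-learning rate is what drives redundant rivals away, giving RPCL its ability to avoid misclassification between competing classes.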