A comparative study of RPCL and MCE based discriminative training methods for LVCSR

  • Authors:
  • Zaihu Pang, Xihong Wu, Lei Xu

  • Affiliations:
  • Speech and Hearing Research Center, Key Laboratory of Machine Perception (Ministry of Education), Peking University, China (all authors); Lei Xu also with the Department of Computer Science and Engineering, The Chinese University o ...

  • Venue:
  • IScIDE'11 Proceedings of the Second Sino-foreign-interchange conference on Intelligent Science and Intelligent Data Engineering
  • Year:
  • 2011

Abstract

This paper presents a comparative study of two discriminative training methods, Rival Penalized Competitive Learning (RPCL) and Minimum Classification Error (MCE), on Large Vocabulary Continuous Speech Recognition (LVCSR) tasks. MCE aims at minimizing a smoothed sentence error on the training data, while RPCL focuses on avoiding misclassification by enforcing the learning of the correct class and de-learning its best rival class. For a fair comparison, both discriminative mechanisms are implemented at the state level. The LVCSR results show that both MCE and RPCL outperform Maximum Likelihood Estimation (MLE), and that RPCL has better discriminative and generalization abilities than MCE.
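To make the RPCL mechanism described in the abstract concrete, the following is a minimal sketch of a supervised RPCL update on class means (a simplified prototype model, not the paper's state-level HMM implementation): the correct class is learned toward the sample, while its best rival is de-learned away from it. The function name, learning rates `eta` and `gamma`, and the nearest-mean scoring are all illustrative assumptions.

```python
import numpy as np

def rpcl_update(means, x, label, eta=0.05, gamma=0.01):
    """One supervised RPCL step on class mean vectors (illustrative sketch).

    means : (K, D) array of class prototypes, updated in place
    x     : (D,) sample vector
    label : index of the correct class
    eta   : learning rate for the correct class
    gamma : de-learning rate for the rival (typically gamma << eta)
    """
    # Score classes by distance to their prototype (smaller = better).
    d = np.linalg.norm(means - x, axis=1)
    # The rival is the best-scoring class other than the correct one.
    d_rival = d.copy()
    d_rival[label] = np.inf
    rival = int(np.argmin(d_rival))
    # Learn the correct class toward x; de-learn the rival away from x.
    means[label] += eta * (x - means[label])
    means[rival] -= gamma * (x - means[rival])
    return means
```

The asymmetric rates capture the "rival penalized" idea: the penalty on the rival is kept small relative to the reward on the winner, so the rival is only gently pushed out of the correct class's region.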