Using SVM as back-end classifier for language identification

  • Authors:
  • Hongbin Suo;Ming Li;Ping Lu;Yonghong Yan

  • Affiliations:
  • ThinkIT Speech Laboratory, Beijing, China (all authors)

  • Venue:
  • EURASIP Journal on Audio, Speech, and Music Processing - Intelligent Audio, Speech, and Music Processing Applications
  • Year:
  • 2008

Abstract

Robust automatic language identification (LID) is the task of identifying the language from a short utterance spoken by an unknown speaker. One of the mainstream approaches, parallel phone recognition followed by language modeling (PPRLM), has achieved very good performance. The log-likelihood ratio (LLR) algorithm has recently been proposed to normalize the posterior probabilities output by the back-end classifiers in PPRLM systems. A support vector machine (SVM) with a radial basis function (RBF) kernel is adopted as the back-end classifier. However, the outputs of a conventional SVM classifier are not probabilities, so we use a pairwise posterior probability estimation (PPPE) algorithm to calibrate the output of each classifier. The proposed approaches are evaluated on the 2005 National Institute of Standards and Technology (NIST) language recognition evaluation database, and experiments show that the systems described in this paper produce results comparable to the state of the art.
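The back-end pipeline sketched in the abstract — turning pairwise SVM outputs into per-language posteriors, then normalizing them into LLR scores — can be illustrated with a small example. This is a hypothetical sketch, not the paper's exact PPPE algorithm: it uses the simple pairwise-coupling rule of Price et al. (1/p_i = Σ_{j≠i} 1/r_ij − (n − 2)) and an LLR normalization that compares each language's posterior against the mean of its competitors; the function names and data layout are invented for illustration.

```python
import math

def couple_pairwise(r, n):
    """Couple pairwise probability estimates into class posteriors.
    r[(i, j)] estimates P(lang i | utterance is lang i or j), i != j.
    Uses the simple Price et al. coupling rule, then renormalizes
    so the n posteriors sum to 1. (Illustrative, not the paper's PPPE.)"""
    p = []
    for i in range(n):
        s = sum(1.0 / r[(i, j)] for j in range(n) if j != i)
        p.append(1.0 / (s - (n - 2)))
    z = sum(p)
    return [pi / z for pi in p]

def llr_normalize(p):
    """LLR-style score for language i: log posterior of i minus the
    log of the mean posterior of all competing languages."""
    n = len(p)
    return [math.log(p[i])
            - math.log(sum(p[j] for j in range(n) if j != i) / (n - 1))
            for i in range(n)]

# Pairwise estimates consistent with true posteriors (0.5, 0.3, 0.2),
# i.e. r_ij = p_i / (p_i + p_j); coupling recovers the posteriors exactly.
true_p = [0.5, 0.3, 0.2]
r = {(i, j): true_p[i] / (true_p[i] + true_p[j])
     for i in range(3) for j in range(3) if i != j}
posteriors = couple_pairwise(r, 3)
scores = llr_normalize(posteriors)  # target language gets the top score
```

With consistent pairwise inputs the coupling step is exact, and the LLR scores are positive for languages whose posterior exceeds the average of the competitors and negative otherwise, which is what makes them convenient for thresholded detection decisions.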