Hidden conditional random fields (HCRFs) directly model the conditional probability of a label sequence given observations. Compared to hidden Markov models (HMMs), HCRFs offer several benefits in modeling speech signals. This paper presents a speaker modeling technique that combines a universal background model (UBM) approach with discriminatively trained HCRFs. An efficient method is proposed for adapting the UBM to an HCRF-based speaker model, which is further enhanced by discriminative training. In identification experiments on 300 speakers drawn from the MAT2000 database, the HCRF-UBM approach consistently achieved the lowest error rate among the three approaches compared (GMM-UBM, HMM-UBM, and HCRF-UBM), regardless of the length of the enrollment speech. This study also examines the elapsed times of the training (enrollment) and testing processes; the results show that HCRF-UBM outperforms HMM-UBM in both, reducing training time by 50% relative to HMM-UBM. These results indicate that the HCRF-UBM approach shows promise for speaker modeling.
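To make the first sentence concrete, the sketch below computes the HCRF posterior p(y | x) for a toy speaker-identification task by marginalizing over hidden-state sequences. All names, weights, and the brute-force enumeration are illustrative assumptions, not the paper's method: a real HCRF implementation would use forward-backward dynamic programming and learned feature weights.

```python
import itertools
import math

def sequence_score(label, hidden_seq, obs_seq, emit_w, trans_w):
    """Sum the feature weights along one hidden-state path for a given label.

    emit_w maps (label, state, observation) to a weight;
    trans_w maps (label, prev_state, state) to a weight.
    Missing features contribute zero.
    """
    score = 0.0
    prev = None
    for s, o in zip(hidden_seq, obs_seq):
        score += emit_w.get((label, s, o), 0.0)
        if prev is not None:
            score += trans_w.get((label, prev, s), 0.0)
        prev = s
    return score

def hcrf_posterior(obs_seq, labels, states, emit_w, trans_w):
    """p(y | x) by brute-force marginalization over all hidden paths.

    Exponential in sequence length -- fine for tiny toy examples only.
    """
    unnorm = {}
    for y in labels:
        total = 0.0
        for path in itertools.product(states, repeat=len(obs_seq)):
            total += math.exp(sequence_score(y, path, obs_seq, emit_w, trans_w))
        unnorm[y] = total
    z = sum(unnorm.values())  # partition function Z(x)
    return {y: v / z for y, v in unnorm.items()}

# Hypothetical two-speaker example with two hidden states.
labels = ["spk_A", "spk_B"]
states = [0, 1]
obs = ["x1", "x2"]
emit_w = {("spk_A", 0, "x1"): 1.0, ("spk_A", 1, "x2"): 1.0,
          ("spk_B", 0, "x1"): 0.2}
trans_w = {}

post = hcrf_posterior(obs, labels, states, emit_w, trans_w)
```

Because the hidden states are summed out inside the normalizer, the model stays a proper conditional distribution over speaker labels, which is what permits direct discriminative training of the weights.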