A new distance measure for hidden Markov models
Expert Systems with Applications: An International Journal
This paper proposes and evaluates a new statistical discrimination measure for hidden Markov models (HMMs), extending the notion of divergence, a measure of average discrimination information originally defined for two probability density functions. Similar distance measures have been proposed for HMMs, but they have focused primarily on the stationary behavior of the models. In speech recognition applications, however, the transient aspects of the models play a principal role in the discrimination process, so capturing this information is crucial in the formulation of any discrimination indicator. This paper introduces the average divergence distance (ADD) as a statistical discrimination measure between two HMMs that accounts for the transient behavior of the models. It provides an analytical formulation of the proposed measure, a justification of its definition based on Viterbi decoding, and a formal proof that the quantity is well defined for a left-to-right HMM topology with a final nonemitting state, a standard model for basic acoustic units in automatic speech recognition (ASR) systems. Experiments based on this measure show that ADD provides a coherent way to evaluate the discrimination dissimilarity between acoustic models.
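To make the idea concrete, the sketch below shows a Monte Carlo estimate of a symmetrized, Viterbi-based divergence between two discrete-emission HMMs. This is an illustrative approximation of the general divergence idea the abstract builds on, not the paper's exact ADD formulation: for simplicity it samples fixed-length sequences from ordinary (ergodic) HMMs rather than using the left-to-right topology with a final nonemitting state, and all model parameters shown are invented for the example.

```python
import numpy as np

def viterbi_loglik(obs, start, trans, emis):
    """Log-probability of the best state path (Viterbi score)
    for a discrete-emission HMM and an observation sequence."""
    log_t = np.log(trans + 1e-12)
    log_e = np.log(emis + 1e-12)
    delta = np.log(start + 1e-12) + log_e[:, obs[0]]
    for symbol in obs[1:]:
        delta = np.max(delta[:, None] + log_t, axis=0) + log_e[:, symbol]
    return delta.max()

def sample(start, trans, emis, length, rng):
    """Draw one observation sequence of fixed length from the HMM."""
    state = rng.choice(len(start), p=start)
    obs = []
    for _ in range(length):
        obs.append(rng.choice(emis.shape[1], p=emis[state]))
        state = rng.choice(len(start), p=trans[state])
    return obs

def symmetric_divergence(hmm_a, hmm_b, n=200, length=20, seed=0):
    """Monte Carlo estimate of a symmetrized, per-frame divergence
    between two HMMs based on Viterbi scores (a sketch only)."""
    rng = np.random.default_rng(seed)

    def directed(p, q):
        total = 0.0
        for _ in range(n):
            x = sample(*p, length, rng)
            total += viterbi_loglik(x, *p) - viterbi_loglik(x, *q)
        return total / (n * length)  # normalize per frame

    return 0.5 * (directed(hmm_a, hmm_b) + directed(hmm_b, hmm_a))
```

A model compared with itself yields zero by construction, while two models with disjoint emission supports yield a large positive value, which matches the intuition of a discrimination dissimilarity.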