Though the hidden Markov modeling (HMM) technique has been successfully applied to various speech recognition tasks, it has one major limitation. It assumes state-conditioned stationarity of the observation vectors, implying that the occurrence of one observation vector is independent of the others whenever these vectors are generated by the same state. In most situations this stationarity assumption is not valid, because the time sequence of observation vectors is highly correlated. In the present paper, we exploit this temporal correlation by conditioning the probability of the current observation vector on the current state as well as on the previous observation vectors. Results from an isolated word recognition experiment using discrete HMMs are reported to illustrate the point.