When modeling signals such as speech with a hidden Markov model (HMM), a learner must often adapt not only to the inherent nonstationarity of the signal but also to changes in the sources (speakers) that produce it. The well-known Baum-Welch algorithm adjusts the HMM parameters to optimize the fit between the model and the observed signal. In this paper we develop an algorithm, which we call the on-line Baum-Welch algorithm, by incorporating a learning rate into the off-line Baum-Welch algorithm. The algorithm proceeds in a series of trials: in each trial it produces an HMM M_t, then receives a symbol sequence w_t and incurs the loss -ln Pr(w_t | M_t), the negative log-likelihood of M_t evaluated at w_t. Performance is measured by the regret, that is, the additional total loss of the algorithm over the total loss of a standard algorithm that serves as the criterion for the relative loss; we take the off-line Baum-Welch algorithm as this standard. As an on-line competitor for comparison we take the Gradient Descent algorithm. Our experiments show that the on-line Baum-Welch algorithm performs well compared to the Gradient Descent algorithm. We carry out the experiments not only on artificial data but also on reasonably realistic data obtained by transforming acoustic waveforms into symbol sequences through vector quantization. The results show that the on-line Baum-Welch algorithm adapts to changes of speakers very well.
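To make the trial protocol concrete, here is a minimal sketch of an on-line Baum-Welch-style update for a discrete-output HMM. It assumes the "learning rate" blends each trial's expected sufficient statistics (one off-line Baum-Welch re-estimate on the current sequence) into the running parameters; the names (eta, pi, A, B) and this exact interpolation rule are illustrative assumptions, not the paper's own code.

```python
# Sketch of an on-line Baum-Welch-style update for a discrete-output HMM.
# Assumption: the learning rate eta interpolates between the current
# parameters and one off-line Baum-Welch (EM) re-estimate per trial.
import numpy as np

def forward_backward(pi, A, B, w):
    """Scaled forward-backward pass on sequence w; returns the state
    posteriors gamma, summed transition posteriors xi, and log-likelihood."""
    T, N = len(w), len(pi)
    alpha = np.zeros((T, N)); c = np.zeros(T)
    alpha[0] = pi * B[:, w[0]]; c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B[:, w[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta = np.ones((T, N))
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B[:, w[t + 1]] * beta[t + 1])) / c[t + 1]
    gamma = alpha * beta
    xi = np.zeros((N, N))
    for t in range(T - 1):
        xi += np.outer(alpha[t], B[:, w[t + 1]] * beta[t + 1]) * A / c[t + 1]
    return gamma, xi, np.log(c).sum()

def online_bw_step(pi, A, B, w, eta=0.1):
    """One trial: incur loss -ln Pr(w | M_t), then move each parameter a
    fraction eta toward this trial's Baum-Welch re-estimate."""
    gamma, xi, loglik = forward_backward(pi, A, B, w)
    A_hat = xi / xi.sum(axis=1, keepdims=True)
    B_hat = np.zeros_like(B)
    for t, sym in enumerate(w):
        B_hat[:, sym] += gamma[t]
    B_hat /= B_hat.sum(axis=1, keepdims=True)
    pi_hat = gamma[0] / gamma[0].sum()
    # eta = 1 would recover one full off-line Baum-Welch step.
    pi = (1 - eta) * pi + eta * pi_hat
    A = (1 - eta) * A + eta * A_hat
    B = (1 - eta) * B + eta * B_hat
    return pi, A, B, -loglik  # loss incurred in this trial

# Usage: accumulate the total loss whose excess over a comparator
# (e.g., the off-line Baum-Welch algorithm) would be the regret.
rng = np.random.default_rng(0)
N, K = 3, 4  # states, output symbols
pi = np.full(N, 1 / N)
A = rng.dirichlet(np.ones(N), size=N)
B = rng.dirichlet(np.ones(K), size=N)
total_loss = 0.0
for trial in range(50):
    w = rng.integers(0, K, size=20)  # stand-in for a quantized waveform
    pi, A, B, loss = online_bw_step(pi, A, B, w)
    total_loss += loss
print(f"cumulative loss after 50 trials: {total_loss:.2f}")
```

Because each updated parameter is a convex combination of stochastic matrices, the model stays properly normalized after every trial; smaller eta values track slow nonstationarity, while larger values adapt faster to abrupt source (speaker) changes.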