We address the problem of learning discrete hidden Markov models from very long sequences of observations. Incremental versions of the Baum-Welch algorithm that approximate the β-values used in the backward procedure are commonly used for this problem, since their memory complexity is independent of the sequence length. We introduce an improved incremental Baum-Welch algorithm with a new backward procedure that approximates the β-values based on a one-step lookahead in the training sequence. We justify the new approach analytically, and report empirical results that show it converges faster than previous incremental algorithms.
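The idea of replacing the full backward pass with a one-step lookahead can be sketched as follows. This is a minimal illustration of the general technique, not the authors' exact algorithm: the function name `incremental_baum_welch`, the accumulator scheme, and the smoothing constant are all assumptions made for the sketch.

```python
import numpy as np

def incremental_baum_welch(obs, n_states, n_symbols, rng=None):
    """Sketch of an incremental Baum-Welch pass over a long discrete
    observation sequence.  Instead of storing beta-values from a full
    backward pass, each beta is approximated from a one-step lookahead
    at the next symbol, so memory use is independent of sequence length.
    (Illustrative sketch; not the paper's exact update rules.)"""
    rng = np.random.default_rng(rng)
    # Random row-stochastic initial parameters: pi, transitions A, emissions B.
    pi = rng.dirichlet(np.ones(n_states))
    A = rng.dirichlet(np.ones(n_states), size=n_states)
    B = rng.dirichlet(np.ones(n_symbols), size=n_states)

    # Running sufficient statistics (small prior mass avoids zero rows).
    trans_counts = np.full((n_states, n_states), 1e-3)
    emit_counts = np.full((n_states, n_symbols), 1e-3)

    alpha = pi * B[:, obs[0]]
    alpha /= alpha.sum()
    for t in range(len(obs) - 1):
        o_next = obs[t + 1]
        # One-step-lookahead approximation of the backward variable:
        # beta_t(i) ~ sum_j A[i, j] * B[j, o_{t+1}].
        beta = A @ B[:, o_next]
        # Approximate state and transition posteriors at time t.
        gamma = alpha * beta
        gamma /= gamma.sum()
        xi = (alpha[:, None] * A) * B[:, o_next][None, :]
        xi /= xi.sum()
        trans_counts += xi
        emit_counts[:, obs[t]] += gamma
        # Re-estimate parameters incrementally from the running statistics.
        A = trans_counts / trans_counts.sum(axis=1, keepdims=True)
        B = emit_counts / emit_counts.sum(axis=1, keepdims=True)
        # Advance the forward recursion one step under the new parameters.
        alpha = (alpha @ A) * B[:, o_next]
        alpha /= alpha.sum()
    return pi, A, B
```

Because only `alpha` and the two count matrices persist across time steps, the working memory is O(N²) in the number of states regardless of how long the training sequence grows, which is the property the abstract highlights.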