Using Dirichlet Mixture Priors to Derive Hidden Markov Models for Protein Families
Proceedings of the 1st International Conference on Intelligent Systems for Molecular Biology
In this paper we investigate the performance of penalized variants of the forward-backward algorithm for training Hidden Markov Models. Maximum likelihood estimation of model parameters can result in over-fitting and poor generalization. We discuss the use of priors to compute maximum a posteriori (MAP) estimates and describe a number of experiments in which models are trained under different conditions. Our results show that MAP estimation can alleviate over-fitting and yield better parameter estimates.
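The abstract does not give the update equations, but a common way to turn the maximum likelihood M-step into a MAP one is to place a Dirichlet prior on each row of the transition matrix, which amounts to adding pseudocounts to the expected counts accumulated during the forward-backward (E) pass. The sketch below is an illustration under that assumption, not the paper's actual method; the counts and the `alpha` value are hypothetical.

```python
import numpy as np

def map_transition_update(expected_counts, alpha):
    """MAP re-estimate of HMM transition rows from expected counts
    accumulated in the E-step, under a per-row symmetric
    Dirichlet(alpha) prior. The prior contributes (alpha - 1)
    pseudocounts per cell, smoothing rows with sparse counts."""
    smoothed = expected_counts + (alpha - 1.0)
    return smoothed / smoothed.sum(axis=1, keepdims=True)

# Hypothetical expected transition counts for a 3-state HMM after one
# forward-backward pass; state 2 was rarely visited, so maximum
# likelihood assigns probability 0 to the unseen 2 -> 1 transition.
counts = np.array([[40.0, 8.0, 2.0],
                   [5.0, 30.0, 5.0],
                   [1.0, 0.0, 1.0]])

ml = counts / counts.sum(axis=1, keepdims=True)     # maximum likelihood
map_est = map_transition_update(counts, alpha=2.0)  # MAP, Dirichlet(2)

print(ml[2])       # -> [0.5 0.  0.5]  (hard zero for unseen transition)
print(map_est[2])  # -> [0.4 0.2 0.4]  (MAP keeps it strictly positive)
```

The difference in the last row shows the over-fitting effect the abstract describes: with few visits to a state, the ML estimate commits to zero probability for unseen transitions, while the MAP estimate keeps them strictly positive.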