This paper explores techniques that exploit the fundamental structural difference between hidden Markov models (HMMs) and hierarchical hidden Markov models (HHMMs). The HHMM structure allows repeated parts of the model to be merged together. A merged model exploits the recurring patterns within the hierarchy, as well as the clusters that exist in some sequences of observations, to increase extraction accuracy. This paper also presents a new technique for reconstructing grammar rules automatically. This work builds on the idea of combining a phrase extraction method with an HHMM to expose patterns within English text. The reconstruction is then used to simplify the complex structure of an HHMM. The models discussed here are evaluated by applying them to natural language tasks based on CoNLL-2004 and a sub-corpus of the Lancaster Treebank.
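The merging idea can be illustrated with a minimal sketch. This is not the paper's algorithm, only an assumption-laden toy: the HHMM hierarchy is represented as nested dictionaries (leaves stand in for emission symbols), and structurally identical subtrees are merged by caching their canonical serialization, so recurring patterns in the hierarchy share a single sub-model instance.

```python
# A toy illustration of merging repeated parts of a hierarchical model.
# Assumption: each internal state is a dict mapping child-state names to
# sub-models; leaves are plain strings standing in for emission symbols.
import json

def merge_repeated_submodels(model, cache=None):
    """Return a copy of `model` in which identical subtrees share one object."""
    if cache is None:
        cache = {}
    if not isinstance(model, dict):  # leaf: an emission symbol
        return model
    merged = {name: merge_repeated_submodels(sub, cache)
              for name, sub in model.items()}
    key = json.dumps(merged, sort_keys=True)  # canonical form of the subtree
    if key not in cache:
        cache[key] = merged
    return cache[key]

# Two branches of the hierarchy contain the same noun-phrase pattern.
hhmm = {
    "S": {"NP": {"DT": "the", "NN": "noun"},
          "VP": {"V": "verb",
                 "NP": {"DT": "the", "NN": "noun"}}},
}
compact = merge_repeated_submodels(hhmm)
# After merging, both NP subtrees are the very same Python object.
assert compact["S"]["NP"] is compact["S"]["VP"]["NP"]
```

In a real HHMM the shared sub-model would also pool its transition and emission statistics across all occurrences, which is the source of the accuracy gain the abstract describes.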