Tagging and chunking with bigrams
COLING '00 Proceedings of the 18th conference on Computational linguistics - Volume 2
Language Understanding Using Two-Level Stochastic Models with POS and Semantic Units
TSD '01 Proceedings of the 4th International Conference on Text, Speech and Dialogue
Shallow parsing using specialized HMMs
The Journal of Machine Learning Research
Introduction to the CoNLL-2000 shared task: chunking
CoNLL '00 Proceedings of the 2nd workshop on Learning language in logic and the 4th conference on Computational natural language learning - Volume 7
CoNLL '01 Proceedings of the 2001 workshop on Computational Natural Language Learning - Volume 7
In this work, we present a stochastic approach to shallow parsing. Most current approaches to shallow parsing share a common characteristic: they take the sequence of lexical tags proposed by a POS tagger as input to the chunking process. Our system instead performs tagging and chunking in a single process using an Integrated Language Model (ILM) formalized as Markov Models. This model integrates several knowledge sources: lexical probabilities, a contextual Language Model (LM) for every chunk, and a contextual LM for the sentences. We have extended the ILM by adding lexical information to the contextual LMs. Applying this approach to the CoNLL-2000 shared task improves the performance of the chunker.
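To illustrate the idea of joint tagging and chunking in a single stochastic process, the sketch below runs Viterbi decoding over an HMM whose states pair a POS tag with a chunk tag. This is a minimal toy, not the authors' implementation: the state set, transition table, and emission table are invented illustrative values standing in for the paper's lexical probabilities and contextual LMs.

```python
import math

# Joint states pair a POS tag with a chunk tag, so one Viterbi pass
# yields both taggings at once (toy values, not trained estimates).
STATES = [("DT", "B-NP"), ("NN", "I-NP"), ("VB", "B-VP")]

# P(state_t | state_{t-1}); the None key marks sentence start.
TRANS = {
    (None, ("DT", "B-NP")): 0.6,
    (None, ("NN", "I-NP")): 0.2,
    (None, ("VB", "B-VP")): 0.2,
    (("DT", "B-NP"), ("NN", "I-NP")): 0.9,
    (("NN", "I-NP"), ("VB", "B-VP")): 0.7,
    (("VB", "B-VP"), ("DT", "B-NP")): 0.8,
}

# P(word | state) -- the lexical probabilities.
EMIT = {
    ("the", ("DT", "B-NP")): 0.9,
    ("dog", ("NN", "I-NP")): 0.5,
    ("barks", ("VB", "B-VP")): 0.5,
}

SMOOTH = 1e-6  # floor for unseen transitions/emissions


def viterbi(words):
    """Return the most likely (POS, chunk) sequence for `words`."""
    # best[s] = (log prob of best path ending in state s, that path)
    best = {}
    for s in STATES:
        p = TRANS.get((None, s), SMOOTH) * EMIT.get((words[0], s), SMOOTH)
        best[s] = (math.log(p), [s])
    for w in words[1:]:
        new = {}
        for s in STATES:
            cands = []
            for prev, (lp, path) in best.items():
                p = TRANS.get((prev, s), SMOOTH) * EMIT.get((w, s), SMOOTH)
                cands.append((lp + math.log(p), path + [s]))
            new[s] = max(cands)
        best = new
    return max(best.values())[1]


print(viterbi(["the", "dog", "barks"]))
# -> [('DT', 'B-NP'), ('NN', 'I-NP'), ('VB', 'B-VP')]
```

Because tag and chunk labels are decoded jointly, chunking decisions can influence tagging and vice versa, which is the motivation for the single-process ILM over a pipeline of separate tagger and chunker.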