In many QA systems, fine-grained named entities are extracted by a coarse-grained named entity recognizer combined with a fine-grained named entity dictionary. In this paper, we describe fine-grained named entity recognition using Conditional Random Fields (CRFs) for question answering. We use CRFs to detect the boundaries of named entities and Maximum Entropy (ME) to classify them into named entity classes. With the proposed approach, we achieve 83.2% precision, 74.5% recall, and 78.6% F1 over 147 fine-grained named entity types. Moreover, we reduce training time to 27% of that of a baseline model without loss of performance. In question answering, the QA system using passage retrieval and AIU achieved about a 26% improvement over QA with passage retrieval alone. These results demonstrate that our approach is effective for QA.
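The two-stage pipeline described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the rule-based boundary detector stands in for the trained CRF, the dictionary lookup stands in for the ME classifier, and the example entries (`Seoul`, `Einstein`) and type labels are hypothetical.

```python
# Sketch of two-stage fine-grained NER: stage 1 detects entity boundaries
# (the paper uses CRFs), stage 2 assigns a fine-grained class to each
# detected span (the paper uses Maximum Entropy). The stand-ins below are
# illustrative placeholders, not trained models.

def detect_boundaries(tokens):
    """Stand-in for the CRF boundary detector: emits BIO-style tags."""
    tags = []
    for tok in tokens:
        if tok[:1].isupper():
            # Continue an entity if the previous token was inside one.
            tags.append("I" if tags and tags[-1] in ("B", "I") else "B")
        else:
            tags.append("O")
    return tags

def classify_span(span_tokens):
    """Stand-in for the ME classifier over fine-grained entity types."""
    fine_grained_dict = {"Seoul": "CITY", "Einstein": "SCIENTIST"}  # hypothetical entries
    return fine_grained_dict.get(span_tokens[0], "UNKNOWN")

def fine_grained_ner(tokens):
    """Combine boundary detection and fine-grained classification."""
    tags = detect_boundaries(tokens)
    entities, span = [], []
    # Sentinel token/tag pair flushes any span still open at the end.
    for tok, tag in zip(tokens + [""], tags + ["O"]):
        if tag == "B":
            if span:
                entities.append((span, classify_span(span)))
            span = [tok]
        elif tag == "I":
            span.append(tok)
        else:
            if span:
                entities.append((span, classify_span(span)))
            span = []
    return entities

print(fine_grained_ner(["Einstein", "visited", "Seoul", "in", "1921"]))
# → [(['Einstein'], 'SCIENTIST'), (['Seoul'], 'CITY')]
```

Separating boundary detection from classification, as above, is what lets the class inventory grow to 147 fine-grained types without retraining the sequence model.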