Current research on Question Answering addresses questions more complex than factoid ones. Although complex questions have been widely investigated, acquiring accurate answers remains a core problem for complex QA. In this paper, we propose an approach that estimates similarity with a topic model. After summarizing relevant texts from web knowledge bases, an answer sentence acquisition model based on Probabilistic Latent Semantic Analysis (PLSA) is introduced to retrieve sentences whose topics are similar to those in the definition set. An answer ranking model then selects sentences that are both statistically and semantically similar to the sentences in the relevant text set. Finally, the sentences are ranked as answer candidates according to their scores. Experiments show that our approach achieves a 5.19% improvement in F-score over the baseline system.
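The abstract does not give implementation details of the PLSA-based similarity step. As a rough, hypothetical illustration of the general technique (not the authors' exact model), the sketch below fits PLSA by EM on a toy document-term count matrix and scores candidate sentences by the cosine similarity of their topic mixtures P(z|d); the vocabulary, corpus, and all function names are invented for this example.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Fit PLSA by EM on a (docs x words) count matrix.

    Returns P(z|d) with shape (docs, topics) and P(w|z) with shape (topics, words).
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(axis=1, keepdims=True)
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w), shape (docs, words, topics)
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        joint /= joint.sum(axis=2, keepdims=True) + 1e-12
        # M-step: re-estimate P(w|z) and P(z|d) from expected counts
        weighted = counts[:, :, None] * joint
        p_w_z = weighted.sum(axis=0).T
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=1)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_z_d, p_w_z

def topic_similarity(p_z_a, p_z_b):
    """Cosine similarity between two topic mixtures P(z|d)."""
    return float(np.dot(p_z_a, p_z_b) /
                 (np.linalg.norm(p_z_a) * np.linalg.norm(p_z_b) + 1e-12))

# Toy corpus: rows are sentences as bag-of-words counts over a 4-word vocabulary.
vocab = ["definition", "term", "sports", "game"]
counts = np.array([
    [2, 1, 0, 0],   # definitional sentence
    [1, 2, 0, 0],   # another definitional sentence
    [0, 0, 2, 1],   # off-topic sentence
], dtype=float)
p_z_d, p_w_z = plsa(counts, n_topics=2)
sim_on = topic_similarity(p_z_d[0], p_z_d[1])    # same topic: high
sim_off = topic_similarity(p_z_d[0], p_z_d[2])   # different topic: low
```

In this toy setup the two definitional sentences share a topic and score a much higher similarity than the off-topic pair, which mirrors the role the topic model plays in ranking candidate answer sentences against the definition set.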