Information Processing and Management: an International Journal
We consider the problem of answering complex questions that require inference and the synthesis of information from multiple documents; the task can be viewed as a form of topic-oriented, informative multi-document summarization. Stochastic, graph-based methods for computing the relative importance of textual units (i.e., sentences) have been very successful in generic summarization. In these methods, a sentence is encoded as a vector in which each component represents the occurrence frequency (TF*IDF) of a word. The major limitation of the TF*IDF approach, however, is that it retains only word frequencies and ignores word order as well as syntactic and semantic information. In this paper, we study the impact of syntactic and shallow semantic information on the graph-based method for answering complex questions.
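The stochastic, graph-based ranking idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: sentences are encoded as TF*IDF vectors, linked by cosine similarity, and scored by the stationary distribution of a damped random walk over the similarity graph (in the style of LexRank-type centrality). The tokenization, smoothing, and damping choices here are assumptions for the sake of a runnable example.

```python
# Sketch of stochastic, graph-based sentence ranking (LexRank-style).
# Assumptions: pre-tokenized sentences, smoothed IDF, damping factor 0.85.
import math
from collections import Counter

def tfidf_vectors(sentences):
    """Encode each tokenized sentence as a {term: tf*idf} vector."""
    n = len(sentences)
    df = Counter(term for sent in sentences for term in set(sent))
    return [
        {t: tf * (math.log(n / df[t]) + 1.0) for t, tf in Counter(sent).items()}
        for sent in sentences
    ]

def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def lexrank(sentences, damping=0.85, iters=50):
    """Rank sentences by the stationary distribution of a random walk
    over their cosine-similarity graph (power iteration with teleport)."""
    vecs = tfidf_vectors(sentences)
    n = len(vecs)
    sim = [[cosine(vecs[i], vecs[j]) for j in range(n)] for i in range(n)]
    # Row-normalize similarities into a stochastic transition matrix.
    row_sums = [sum(row) or 1.0 for row in sim]
    trans = [[sim[i][j] / row_sums[i] for j in range(n)] for i in range(n)]
    scores = [1.0 / n] * n
    for _ in range(iters):
        scores = [
            (1 - damping) / n
            + damping * sum(scores[i] * trans[i][j] for i in range(n))
            for j in range(n)
        ]
    return scores
```

A sentence that is similar to many other sentences accumulates more random-walk probability and is therefore judged more salient; the abstract's point is that this signal rests entirely on word frequencies, since the cosine kernel above sees no word order, syntax, or semantics.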