A text mining approach for definition question answering
FinTAL'06 Proceedings of the 5th international conference on Advances in Natural Language Processing
The TALP-QA system for spanish at CLEF 2005
CLEF'05 Proceedings of the 6th international conference on Cross-Language Evaluation Forum: accessing Multilingual Information Repositories
A new algorithm for fast discovery of maximal sequential patterns in a document collection
CICLing'06 Proceedings of the 7th international conference on Computational Linguistics and Intelligent Text Processing
Language independent passage retrieval for question answering
MICAI'05 Proceedings of the 4th Mexican international conference on Advances in Artificial Intelligence
Overview of the CLEF 2004 multilingual question answering track
CLEF'04 Proceedings of the 5th conference on Cross-Language Evaluation Forum: multilingual Information Access for Text, Speech and Images
Overview of the CLEF 2006 multilingual question answering track
CLEF'06 Proceedings of the 7th international conference on Cross-Language Evaluation Forum: evaluation of multilingual and multi-modal information retrieval
Priberam's question answering system in a cross-language environment
CLEF'06 Proceedings of the 7th international conference on Cross-Language Evaluation Forum: evaluation of multilingual and multi-modal information retrieval
Answering questions with an n-gram based passage retrieval engine
Journal of Intelligent Information Systems
Learning to select the correct answer in multi-stream question answering
Information Processing and Management: an International Journal
This paper describes a QA system built on a fully data-driven architecture. It applies machine learning and text mining techniques to identify the most probable answers to factoid and definition questions, respectively. Its main strength is that it relies chiefly on lexical information and avoids complex language-processing resources such as named-entity classifiers, parsers, and ontologies. Experimental results on the Spanish Question Answering task at CLEF 2006 show that the proposed architecture is a practical solution for monolingual question answering, reaching a precision of 51%.
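The lexical-only strategy described in the abstract can be illustrated with a toy sketch (an assumption-laden illustration, not the authors' actual implementation): candidate passages are ranked purely by weighted word n-gram overlap with the question, in the spirit of the n-gram based passage retrieval work listed above, with no named-entity recognition, parsing, or ontologies involved.

```python
from collections import Counter

def ngrams(tokens, n):
    """Return the list of word n-grams in a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def score_passage(question, passage, max_n=3):
    """Score a passage by weighted n-gram overlap with the question.

    Longer matching n-grams contribute more, so passages preserving the
    question's word order outrank simple bag-of-words matches.
    (Illustrative scoring scheme, not the paper's exact formula.)
    """
    q_tokens = question.lower().split()
    p_tokens = passage.lower().split()
    score = 0.0
    for n in range(1, max_n + 1):
        q_grams = Counter(ngrams(q_tokens, n))
        p_grams = Counter(ngrams(p_tokens, n))
        # Multiset intersection counts shared n-grams; weight by length n.
        score += n * sum((q_grams & p_grams).values())
    return score

def best_passage(question, passages):
    """Return the passage with the highest lexical-overlap score."""
    return max(passages, key=lambda p: score_passage(question, p))
```

Because the scoring uses only surface word forms, the same code works unchanged for Spanish or any other whitespace-tokenized language, which mirrors the language-independence claim of the cited passage-retrieval approach.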