An investigation to find appropriate measures for evaluating interactive information retrieval
KEA: practical automatic keyphrase extraction. In Proceedings of the Fourth ACM Conference on Digital Libraries
Data mining: practical machine learning tools and techniques with Java implementations
Scaling question answering to the Web. In Proceedings of the 10th International Conference on World Wide Web
Communications of the ACM
The role of context in question answering systems. In CHI '03 Extended Abstracts on Human Factors in Computing Systems
Journal of the American Society for Information Science and Technology
Personalizing search via automated analysis of interests and activities. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval
Text simplification for reading assistance: a project note. In PARAPHRASE '03: Proceedings of the Second International Workshop on Paraphrasing, Volume 16
An adaptive system for the personalized access to news. AI Communications
User Modelling for Personalized Question Answering. In AI*IA '07: Proceedings of the 10th Congress of the Italian Association for Artificial Intelligence: Artificial Intelligence and Human-Oriented Computing
Most question answering (QA) and information retrieval (IR) systems are insensitive to differences in users' needs and preferences, as well as their reading levels. In (Quarteroni and Manandhar, 2006), we introduced a hybrid QA-IR system based on a user model. In this paper, we focus on how the system filters and re-ranks the search engine results for a query according to their reading difficulty, providing user-tailored answers.
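The abstract describes re-ranking retrieved results by reading difficulty against a user model. The paper itself does not publish scoring code; the sketch below only illustrates the general idea, using the standard Flesch Reading Ease formula with a naive vowel-group syllable heuristic. The function names, the choice of readability metric, and the distance-to-target ranking criterion are all assumptions for illustration, not the system's actual method.

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: count contiguous vowel groups (heuristic only)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    """Flesch Reading Ease: higher scores indicate easier-to-read text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    if not sentences or not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return (206.835
            - 1.015 * (len(words) / len(sentences))
            - 84.6 * (syllables / len(words)))

def rerank_by_reading_level(results: list[str], target_ease: float) -> list[str]:
    """Re-rank documents so those closest to the user's preferred
    reading-ease score (from the user model) come first."""
    return sorted(results,
                  key=lambda doc: abs(flesch_reading_ease(doc) - target_ease))
```

For example, with a high `target_ease` (an easy reading level), a short-sentence, short-word document is ranked ahead of one dominated by long polysyllabic sentences.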