Performance standards and evaluations in IR test collections: cluster-based retrieval models
Information Processing and Management: an International Journal
The anatomy of a large-scale hypertextual Web search engine
WWW7 Proceedings of the seventh international conference on World Wide Web
Investigating the statistical properties of user-generated documents
FQAS'11 Proceedings of the 9th international conference on Flexible Query Answering Systems
Improving retrieval of short texts through document expansion
SIGIR '12 Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval
The prevalence of short and ill-written documents today has brought into question the effectiveness of modern retrieval systems. We evaluated three retrieval systems: LSI, Keyphind, and a Google simulator. Overall, LSI performed better than either Keyphind or the Google simulator. However, recall-precision graphs revealed that at low recall levels the Google simulator outperformed both LSI and Keyphind, and when retrieval was weighted to favour highly relevant documents the Google approach was again preferable.
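The LSI approach evaluated above can be illustrated with a minimal sketch: factor a document-term matrix with a truncated SVD, fold queries into the resulting latent space, and rank documents by cosine similarity. The matrix, the number of latent dimensions, and the query below are invented for illustration and do not reflect the paper's actual collection or setup.

```python
import numpy as np

# Toy document-term matrix: rows are documents, columns are term counts.
# (Invented data; real LSI would start from a large, sparse matrix.)
docs = np.array([
    [2, 1, 0, 0],
    [1, 2, 0, 1],
    [0, 0, 3, 1],
    [0, 1, 2, 2],
], dtype=float)

# Truncated SVD: keep k latent dimensions.
U, s, Vt = np.linalg.svd(docs, full_matrices=False)
k = 2
doc_vecs = U[:, :k] * s[:k]          # document representations in latent space

def lsi_rank(query_terms):
    """Fold a query term vector into the latent space and rank documents
    by cosine similarity (most similar first)."""
    q = query_terms @ Vt[:k].T
    sims = doc_vecs @ q / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)

# A query with the same term profile as document 0 should rank it first.
ranking = lsi_rank(np.array([2.0, 1.0, 0.0, 0.0]))
print(ranking)
```

Truncating to k dimensions is what distinguishes LSI from plain vector-space retrieval: documents sharing latent structure score as similar even without exact term overlap, which is precisely what makes it attractive for short, ill-written texts.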