Web metasearch: rank vs. score based rank aggregation methods
Proceedings of the 2003 ACM symposium on Applied computing
A core problem in metasearch is combining results from multiple search systems while improving the precision of the aggregated ranking. This paper describes a rank aggregation model that incorporates a text-analysis measure into existing rank-based methods, such as Best Rank and Borda Rank, to aggregate search results from multiple search systems. The approach provides a means of normalizing the differences in ranking methodology across search systems, highlighting the potential of content analysis to improve result relevance in metasearch. We fully describe our approach to text normalization for metasearch and present our rationale for using two rank-based methods in our model. We then evaluate and benchmark the model's performance against user judgments of result relevance. Our experimental results show that when the text-analysis factor is taken into account, the combined ranking outperforms the rank-based methods alone, suggesting that our model can complement current rank aggregation methods used in metasearch.
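To make the rank-based baselines concrete, the following is a minimal sketch of the two aggregation methods named in the abstract, Borda Rank and Best Rank. The function names and the convention that each engine returns an ordered list of document identifiers are illustrative assumptions, not the paper's actual implementation.

```python
def borda_aggregate(rankings):
    """Borda Rank: a document at position i in a ranked list of length n
    earns n - i points; documents absent from a list earn 0. Documents
    are ordered by total points, highest first."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for i, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0) + (n - i)
    return sorted(scores, key=scores.get, reverse=True)

def best_rank_aggregate(rankings):
    """Best Rank: each document is scored by the best (lowest) position
    it achieves in any input list; ties keep insertion order."""
    best = {}
    for ranking in rankings:
        for i, doc in enumerate(ranking):
            best[doc] = min(best.get(doc, i), i)
    return sorted(best, key=best.get)

# Two hypothetical engine result lists over shared documents
engine_a = ["d1", "d2", "d3"]
engine_b = ["d2", "d1", "d4"]
print(borda_aggregate([engine_a, engine_b]))
print(best_rank_aggregate([engine_a, engine_b]))
```

The paper's contribution is to weight such rank-based scores with a text-analysis measure, which this sketch omits.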