Searching distributed collections with inference networks
SIGIR '95: Proceedings of the 18th annual international ACM SIGIR conference on Research and development in information retrieval
Analyses of multiple evidence combination
Proceedings of the 20th annual international ACM SIGIR conference on Research and development in information retrieval
Inquirus, the NECI meta search engine
WWW7: Proceedings of the seventh international conference on World Wide Web 7
Approaches to collection selection and results merging for distributed information retrieval
Proceedings of the tenth international conference on Information and knowledge management
Relevance score normalization for metasearch
Proceedings of the tenth international conference on Information and knowledge management
Cumulated gain-based evaluation of IR techniques
ACM Transactions on Information Systems (TOIS)
Result merging strategies for a current news metasearcher
Information Processing and Management: an International Journal
A semisupervised learning method to merge search engine results
ACM Transactions on Information Systems (TOIS)
Web metasearch: rank vs. score based rank aggregation methods
Proceedings of the 2003 ACM symposium on Applied computing
Evaluating sampling methods for uncooperative collections
SIGIR '07: Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval
A formal approach to score normalization for meta-search
HLT '02: Proceedings of the second international conference on Human Language Technology Research
Relevance assessment: are judges exchangeable and does it matter
Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval
Foundations and Trends in Information Retrieval
Effective enterprise search must draw on a number of sources---for example, web pages, telephone directories, and databases---which requires merging results of very different types into a single ranked list. Many merging algorithms have been proposed, but none has been applied to this realistic setting. We report the results of an experiment that simulates heterogeneous enterprise retrieval in a university setting and uses multi-grade expert judgements to compare merging algorithms. The algorithms considered include several variants of round-robin, several methods proposed by Rasolofo et al. for the Current News Metasearcher, and four novel variations, including a learned multi-weight method. We find that the round-robin methods and one of the Rasolofo methods perform significantly worse than the others. The GDS_TS method of Rasolofo achieves the highest average NDCG@10 score, but the differences between it and the other GDS methods, local reranking, and the multi-weight method are not significant.
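Two techniques named in the abstract can be sketched briefly: round-robin merging (a baseline that interleaves ranked lists without using scores) and NDCG@k with graded judgements (the evaluation measure of Jarvelin and Kekalainen cited above). The following is a minimal illustrative sketch, not the paper's implementation; the function names and list representations are our own.

```python
import math

def round_robin_merge(result_lists):
    """Interleave ranked lists: take the top remaining result from each
    source in turn, skipping duplicates. Uses no score information."""
    merged, seen = [], set()
    for rank in range(max(len(lst) for lst in result_lists)):
        for lst in result_lists:
            if rank < len(lst) and lst[rank] not in seen:
                seen.add(lst[rank])
                merged.append(lst[rank])
    return merged

def ndcg_at_k(graded_rels, k=10):
    """NDCG@k over multi-grade relevance judgements, in rank order,
    using the standard log2 rank discount."""
    def dcg(rels):
        return sum(r / math.log2(i + 2) for i, r in enumerate(rels[:k]))
    ideal = dcg(sorted(graded_rels, reverse=True))
    return dcg(graded_rels) / ideal if ideal > 0 else 0.0
```

For example, `round_robin_merge([["a", "b"], ["c", "a"]])` yields `["a", "c", "b"]`, and a merged list whose grades are already in descending order scores an NDCG of 1.0.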