Global ranking, a new information retrieval (IR) technology, applies a ranking model to cases in which relationships exist among the objects to be ranked. In this setting, the ranking model is defined as a function of both the properties of the objects and the relations between them. Existing global ranking approaches address the problem through "learning to rank". In this paper, we propose a global ranking framework that solves the problem via data fusion. The idea is to treat each retrieved document as a pseudo-IR system: each document generates a pseudo-ranked list through a global function, and a data fusion algorithm is then adapted to merge these lists into the final ranking. Taking a biomedical information extraction task, the interactor normalization task (INT), as an example, we show how the problem can be formulated as a global ranking problem and demonstrate that the proposed fusion-based framework outperforms baseline methods. Using the proposed framework, we improve the performance of the top INT system by 3.2% under the official evaluation metric of the BioCreAtIvE challenge. In addition, using the standard ranking quality measure NDCG, we demonstrate that the proposed framework can be cascaded with different local ranking models and improve their ranking results.
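The abstract's core mechanism, treating each retrieved document as a pseudo-IR system and fusing the resulting pseudo-ranked lists, can be sketched as follows. This is an illustrative reconstruction only: the paper does not specify its global function or fusion rule here, so the token-overlap similarity and the CombSUM-style reciprocal-rank credit below are assumptions, and all names (`pseudo_ranked_list`, `fuse`, `overlap`) are hypothetical.

```python
# Hedged sketch of a fusion-based global ranking framework:
# every retrieved document acts as a pseudo-IR system that ranks
# all documents via a global (pairwise) function; the per-document
# pseudo-lists are then merged by a CombSUM-style fusion step.

def pseudo_ranked_list(anchor, documents, global_fn):
    """Rank all documents by their global relation to one anchor document."""
    scored = [(doc, global_fn(anchor, doc)) for doc in documents]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

def fuse(documents, global_fn):
    """Merge one pseudo-ranked list per document (CombSUM-style)."""
    totals = {doc: 0.0 for doc in documents}
    for anchor in documents:
        ranked = pseudo_ranked_list(anchor, documents, global_fn)
        for rank, (doc, _score) in enumerate(ranked, start=1):
            totals[doc] += 1.0 / rank  # reciprocal-rank credit (assumed)
    return sorted(documents, key=lambda d: totals[d], reverse=True)

# Toy global function: shared-token overlap between two "documents".
def overlap(a, b):
    return len(set(a.split()) & set(b.split()))

docs = ["gene protein binding", "protein interaction", "weather report"]
final = fuse(docs, overlap)
```

Documents that relate strongly to many others accumulate credit across the pseudo-lists, so the fused ranking promotes globally well-connected documents, which is the effect the framework exploits.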