In Information Retrieval (IR), it is common practice to compare the rankings observed during an experiment; the statistical procedure for comparing rankings is called rank correlation. Rank correlation helps decide the success of new systems, models and techniques. The most widely used coefficient for measuring rank correlation is Kendall's τ. In IR, however, the most relevant, useful or interesting items should often count more in the correlation than the least important items. Despite its simplicity and widespread use, Kendall's τ does little to discriminate items by importance. To overcome this drawback, this paper introduces a family τ* of rank correlation coefficients for IR that weights the correlation according to the rank of the items. The basis is provided by the notion of gain previously utilized in retrieval effectiveness measurement. The probability distribution of τ* is also provided.
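To make the distinction concrete, the following sketch computes plain Kendall's τ over item pairs, alongside a top-weighted variant in which each pair is weighted by a DCG-style gain of the best rank it involves. The gain function and the pair-weighting scheme here are illustrative assumptions for exposition only, not the τ* family defined in the paper.

```python
from itertools import combinations
from math import log2

def kendall_tau(r1, r2):
    """Plain Kendall's tau: (concordant - discordant) / number of pairs.

    r1, r2: lists giving each item's rank (1 = top) under two rankings.
    """
    n = len(r1)
    c = d = 0
    for i, j in combinations(range(n), 2):
        s = (r1[i] - r1[j]) * (r2[i] - r2[j])
        if s > 0:
            c += 1          # pair ordered the same way in both rankings
        elif s < 0:
            d += 1          # pair ordered oppositely
    return (c - d) / (n * (n - 1) / 2)

def weighted_tau(r1, r2, gain=lambda rank: 1.0 / log2(rank + 1)):
    """Top-weighted variant (illustrative, NOT the paper's tau*):
    each pair is weighted by the gain of the best rank it involves in
    either ranking, so disagreements near the top cost more.
    """
    num = den = 0.0
    for i, j in combinations(range(len(r1)), 2):
        w = gain(min(r1[i], r1[j])) + gain(min(r2[i], r2[j]))
        s = (r1[i] - r1[j]) * (r2[i] - r2[j])
        num += w if s > 0 else (-w if s < 0 else 0.0)
        den += w
    return num / den
```

Under this weighting, swapping the top two items lowers the coefficient more than swapping the bottom two, e.g. `weighted_tau([1, 2, 3], [2, 1, 3]) < weighted_tau([1, 2, 3], [1, 3, 2])`, whereas plain Kendall's τ scores both swaps identically.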