We propose a popularity-weighted ranking algorithm for academic digital libraries that uses the popularity factor of a publication venue, overcoming the limitations of the impact factor. We compare our method with naive PageRank, citation counts, and the HITS algorithm, three popular measures currently used to rank papers beyond lexical similarity. The ranking results are evaluated by the discounted cumulative gain (DCG) method using four human evaluators. We show that our proposed ranking algorithm improves DCG performance by 8.5% on average compared to naive PageRank, 16.3% compared to citation counts, and 23.2% compared to HITS. The algorithm is also evaluated with click-through data from the CiteSeer usage log.
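For reference, DCG scores a ranked list by summing graded relevance judgments discounted logarithmically by rank, so gains placed lower in the ranking contribute less. A minimal sketch of the standard formulation (the abstract does not specify which DCG variant or log base the evaluation used; the common base-2 form is assumed here):

```python
import math

def dcg(relevances):
    """Discounted cumulative gain for a ranked list of graded
    relevance judgments (rank 1 first).

    Standard formulation: DCG = sum(rel_i / log2(i + 1)) over
    1-indexed ranks i; enumerate() is 0-indexed, hence log2(i + 2).
    """
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

# Hypothetical graded judgments (0 = not relevant, 3 = highly relevant)
# for the top 6 results returned by a ranker:
print(round(dcg([3, 2, 3, 0, 1, 2]), 3))  # ≈ 6.861
```

Comparing two rankers on the same query then reduces to comparing the DCG of their result lists over the same judged documents, which is how the percentage improvements above would be computed.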