Citation Analysis in Research Evaluation (Information Science & Knowledge Management)
In reaction to a previous critique (Opthof and Leydesdorff, J Informetr 4(3):423–430, 2010), the Center for Science and Technology Studies (CWTS) in Leiden proposed to replace its old "crown" indicator in citation analysis with a new one. Waltman (Scientometrics 87:467–481, 2011a) argued that this change does not affect rankings at various aggregated levels. However, the CWTS data are not publicly available for testing and criticism. We therefore comment using the previously published data of Van Raan (Scientometrics 67(3):491–502, 2006) to address the pivotal issue of how the results of citation analysis correlate with the results of peer review. A quality parameter based on peer review was not significantly correlated with either of the two parameters developed by the CWTS in the past, citations per paper/mean journal citation score (CPP/JCSm) and citations per paper/mean field citation score (CPP/FCSm), nor with the more recently proposed h-index (Hirsch, Proc Natl Acad Sci USA 102(46):16569–16572, 2005). Given the high correlations between the old and new "crown" indicators, the lack of correlation with the peer-review-based quality indicator can be expected to apply equally to the newly developed ones.