In recent years, several national and community-driven conference rankings have been compiled. These rankings are often taken as indicators of reputation and used for a variety of purposes, such as evaluating the performance of academic institutions and individual scientists, or selecting target conferences for paper submissions. Current rankings are based on a combination of objective criteria and subjective opinions that are collated and reviewed through largely manual processes. In this setting, the aim of this paper is to shed light on the following question: to what extent do existing conference rankings reflect objective criteria, specifically submission and acceptance statistics and bibliometric indicators? The paper considers three conference rankings in the field of Computer Science: an Australian national ranking, a Brazilian national ranking, and an informal community-built ranking. It is found that in all cases bibliometric indicators are the most important determinants of rank. It is also found that in all three rankings, top-tier conferences can be identified with relatively high accuracy from acceptance rates and bibliometric indicators, whereas these same criteria fail to discriminate between mid-tier and bottom-tier conferences.