Assessing the quality of scientific conferences is an important and useful service that can be provided by digital libraries and similar systems. This is especially true for fields such as Computer Science and Electrical Engineering, where conference publications are crucial. However, most existing approaches for assessing the quality of publication venues have been proposed for journals. In this paper, we characterize a large number of features that can serve as criteria for assessing the quality of scientific conferences, and we study how these features can be automatically combined by means of machine learning techniques to perform this task effectively. The features studied include citations, submission and acceptance rates, the tradition of the conference, and the reputation of its program committee members. Among our findings: (1) separating high-quality conferences from medium- and low-quality ones can be done quite effectively, but separating the latter two classes is a much harder task; and (2) citation features, followed by those associated with the tradition of the conference, are the most important ones for the task.
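To illustrate the general idea of combining venue features into a quality classifier, the following is a minimal sketch. The feature values, tier labels, and the nearest-centroid rule are all hypothetical choices for illustration; the paper's actual learners and data are not reproduced here.

```python
# Hedged sketch: classify conference quality tiers (A/B/C) from
# hypothetical venue features: (citations per paper, acceptance rate,
# years of tradition, program-committee reputation score).
from statistics import mean

# Illustrative training data; values are invented, not from the paper.
train = {
    "A": [(12.0, 0.15, 25, 0.90), (10.5, 0.18, 30, 0.80)],
    "B": [(4.0, 0.35, 10, 0.50), (5.0, 0.30, 12, 0.60)],
    "C": [(1.0, 0.55, 3, 0.20), (0.5, 0.60, 2, 0.30)],
}

def centroid(vectors):
    """Average feature profile of one quality tier."""
    return tuple(mean(xs) for xs in zip(*vectors))

centroids = {tier: centroid(vecs) for tier, vecs in train.items()}

def predict(features):
    """Nearest-centroid rule: pick the tier whose average profile is
    closest in (unscaled) Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(centroids, key=lambda t: dist(features, centroids[t]))

print(predict((11.0, 0.16, 28, 0.85)))  # a venue resembling tier A
```

In practice the features would need scaling (years of tradition dominates the raw distance here), and a stronger learner would replace the centroid rule, but the sketch shows the core step: mapping a feature vector for a venue to a quality class.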