The myth of the double-blind review?: author identification using only citations
ACM SIGKDD Explorations Newsletter
The vast majority of scientific journal, conference, and grant selection processes withhold the names of the reviewers from the original submitters, taking a better-safe-than-sorry approach to maintaining collegiality within the small-world communities of academia. While the contents of a review may not color the long-term relationship between the submitter and the reviewer, it is best not to require us all to be saints. This paper raises the question of whether the assumption of reviewer anonymity still holds in the face of readily available, high-quality machine learning toolkits. Our threat model focuses on how a member of a community might, over time, amass a large corpus of unblinded reviews by serving on multiple conference and grant selection committees. We show that with access to even a relatively small corpus of such reviews, simple classification techniques from existing toolkits identify reviewers with reasonably high accuracy. We discuss the implications of these findings and describe some potential technical and policy-based countermeasures.
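The attack described above can be illustrated with an off-the-shelf toolkit. The sketch below is not the paper's actual pipeline; it is a minimal, hypothetical example using scikit-learn, with an invented four-review corpus and made-up reviewer names ("alice", "bob"), showing how character n-gram features and a naive Bayes classifier can attribute a nominally anonymous review to its likeliest author.

```python
# Illustrative sketch (not the paper's method): attribute an anonymous
# review to a candidate reviewer using a simple text classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: unblinded reviews with known reviewers,
# as an attacker might amass from committee service over time.
reviews = [
    "The evaluation section is thorough, however the baselines are weak.",
    "However, the proofs are thorough and the related work is complete.",
    "Nice idea!! But the experiments are tiny and the writing is rushed.",
    "Cool result!! Tiny dataset though, and the figures need work.",
]
reviewers = ["alice", "alice", "bob", "bob"]

# Character n-grams are a common stylometric feature: they pick up
# punctuation habits and word fragments rather than topic words.
model = make_pipeline(
    TfidfVectorizer(analyzer="char", ngram_range=(2, 3)),
    MultinomialNB(),
)
model.fit(reviews, reviewers)

# A new, nominally anonymous review is attributed to its likeliest author.
anonymous_review = "Cool paper!! But the experiments are tiny."
print(model.predict([anonymous_review])[0])
```

With a realistic corpus the same recipe scales to many candidate reviewers, which is why the paper's countermeasures target both the availability of unblinded reviews and the stylistic signal within them.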