Herbert West: Deanonymizer

  • Authors:
  • Mihir Nanavati, Nathan Taylor, William Aiello, Andrew Warfield

  • Affiliation:
  • University of British Columbia

  • Venue:
  • HotSec'11: Proceedings of the 6th USENIX Conference on Hot Topics in Security
  • Year:
  • 2011

Abstract

The vast majority of scientific journal, conference, and grant selection processes withhold the names of the reviewers from the original submitters, taking a better-safe-than-sorry approach to maintaining collegiality within the small-world communities of academia. While the contents of a review should not color the long-term relationship between the submitter and the reviewer, it is best not to require us all to be saints. This paper raises the question of whether the assumption of reviewer anonymity still holds in the face of readily available, high-quality machine learning toolkits. Our threat model focuses on how a member of a community might, over time, amass a large number of unblinded reviews by serving on a number of conference and grant selection committees. We show that with access to even a relatively small corpus of such reviews, simple classification techniques from existing toolkits successfully identify reviewers with reasonably high accuracy. We discuss the implications of these findings and describe some potential technical and policy-based countermeasures.
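The attack the abstract describes — matching a blinded review against a corpus of unblinded reviews from known committee members — is, at its core, an authorship attribution problem. The paper itself uses off-the-shelf machine learning toolkits; the sketch below is not the authors' pipeline but a minimal, hypothetical illustration of the idea using classic stylometric features (relative frequencies of common function words) and a nearest-centroid classifier. All function names and the toy vocabulary are assumptions for illustration.

```python
import math
from collections import Counter

def features(text, vocab):
    # Stylometric feature vector: relative frequencies of the chosen
    # function words, a classic signal for authorship attribution.
    words = [w.strip(".,;:!?") for w in text.lower().split()]
    counts = Counter(words)
    total = max(len(words), 1)
    return [counts[w] / total for w in vocab]

def cosine(a, b):
    # Cosine similarity between two feature vectors (0.0 if either is zero).
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def attribute(unblinded, blinded_review, vocab):
    # unblinded: dict mapping reviewer name -> list of reviews they signed.
    # Build one centroid feature vector per reviewer, then guess the
    # reviewer whose centroid is most similar to the blinded review.
    centroids = {}
    for reviewer, reviews in unblinded.items():
        vecs = [features(r, vocab) for r in reviews]
        centroids[reviewer] = [sum(col) / len(vecs) for col in zip(*vecs)]
    target = features(blinded_review, vocab)
    return max(centroids, key=lambda r: cosine(centroids[r], target))
```

On a toy corpus where one reviewer habitually writes "however" and another "moreover", `attribute` recovers the author of an unseen review from that single tic; the paper's point is that with real toolkits and a modest corpus of genuine reviews, much richer feature sets achieve the same effect at reasonably high accuracy.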