Automating the assignment of submitted manuscripts to reviewers

  • Authors:
  • Susan T. Dumais
  • Jakob Nielsen

  • Affiliations:
  • Bellcore, 445 South Street, Morristown, NJ (both authors)

  • Venue:
  • SIGIR '92 Proceedings of the 15th annual international ACM SIGIR conference on Research and development in information retrieval
  • Year:
  • 1992

Abstract

The 117 manuscripts submitted for the Hypertext '91 conference were assigned to members of the review committee, using a variety of automated methods based on information retrieval principles and Latent Semantic Indexing. Fifteen reviewers provided exhaustive ratings for the submitted abstracts, indicating how well each abstract matched their interests. The automated methods do a fairly good job of assigning relevant papers for review, but they are still somewhat poorer than assignments made manually by human experts and substantially poorer than an assignment perfectly matching the reviewers' own ranking of the papers. A new automated assignment method called “n of 2n” achieves better performance than human experts by sending reviewers more papers than they actually have to review and then allowing them to choose part of their review load themselves.
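The abstract describes two ideas: matching papers to reviewers with Latent Semantic Indexing (truncated SVD of a term-document matrix, with cosine similarity in the latent space), and the "n of 2n" scheme of offering each reviewer twice their review load and letting them choose half. A minimal sketch of both, using a tiny hypothetical vocabulary and made-up paper/reviewer vectors (none of this data is from the paper), might look like:

```python
import numpy as np

# Hypothetical toy data: columns are documents over a 5-term vocabulary.
# The paper's actual corpus (117 Hypertext '91 abstracts) is not reproduced here.
terms = ["hypertext", "navigation", "retrieval", "indexing", "interface"]
papers = np.array([
    [3, 1, 0, 0, 1],   # paper 0: hypertext/interface flavored
    [0, 0, 4, 2, 0],   # paper 1: retrieval flavored
    [2, 2, 0, 0, 2],   # paper 2: navigation/interface flavored
    [0, 1, 3, 3, 0],   # paper 3: indexing flavored
], dtype=float).T      # shape (n_terms, n_papers)

reviewers = np.array([
    [2, 1, 0, 0, 2],   # reviewer 0: hypertext interfaces
    [0, 0, 3, 2, 0],   # reviewer 1: retrieval/indexing
], dtype=float).T      # shape (n_terms, n_reviewers)

# Latent Semantic Indexing: rank-k truncated SVD of the term-paper matrix.
k = 2
U, s, Vt = np.linalg.svd(papers, full_matrices=False)
Uk, sk = U[:, :k], s[:k]
paper_vecs = Vt[:k, :].T                          # papers in the latent space
# Standard LSI query fold-in: q_hat = q^T U_k diag(1/s_k)
reviewer_vecs = reviewers.T @ Uk @ np.diag(1.0 / sk)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

n = 1  # each reviewer's actual review load
for r, rv in enumerate(reviewer_vecs):
    sims = np.array([cosine(rv, pv) for pv in paper_vecs])
    offered = np.argsort(-sims)[: 2 * n]          # "n of 2n": offer 2n candidates
    print(f"reviewer {r}: offered papers {offered.tolist()}, keeps {n}")
```

The "n of 2n" step is the final ranking cut: rather than assigning the top `n` papers outright, the system offers the top `2n` and lets the reviewer select `n`, which is how the method reported in the abstract recovers reviewer preferences the similarity model misses.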