A systematic study on parameter correlations in large-scale duplicate document detection

  • Authors:
  • Shaozhi Ye; Ji-Rong Wen; Wei-Ying Ma

  • Affiliations:
  • University of California, Davis, Department of Computer Science, Davis, CA 95616, USA; Microsoft Research Asia, Beijing, P.R. China; Microsoft Research Asia, Beijing, P.R. China

  • Venue:
  • Knowledge and Information Systems
  • Year:
  • 2008


Abstract

Although much work has been done on duplicate document detection (DDD) and its applications, a systematic study of the performance and scalability of large-scale DDD algorithms is still missing. It remains unclear how the various parameters of DDD, such as similarity threshold, precision/recall requirement, sampling ratio, and document size, correlate with one another. This paper explores the correlations among several of the most important parameters in DDD; the impact of the sampling ratio is of particular interest, since it heavily affects both the accuracy and the scalability of DDD algorithms. An empirical analysis is conducted on one million HTML documents from the TREC .GOV collection. Experimental results show that, even with the same sampling ratio, the precision of DDD varies greatly across documents of different sizes. Based on this observation, we propose an adaptive sampling strategy for DDD, which minimizes the sampling ratio subject to a given precision requirement. We believe the insights from our analysis are helpful for guiding future large-scale DDD work.
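The abstract describes a sampling-based DDD pipeline in which a fraction of each document's shingles is kept and compared, with the sampling ratio adapted to document size to meet a precision target. The paper's concrete algorithm is not reproduced here, so the following Python sketch only illustrates that general idea; the shingle size, the `adaptive_ratio` rule, and all constants are hypothetical placeholders, not the authors' method.

```python
import hashlib

def shingles(text: str, k: int = 5) -> set[str]:
    """Split a document into overlapping word k-shingles."""
    words = text.split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def adaptive_ratio(num_shingles: int, base_ratio: float = 0.1,
                   target_samples: int = 50) -> float:
    """Hypothetical adaptive rule: sample small documents more densely so
    every document keeps roughly target_samples shingles, reflecting the
    observation that a fixed ratio hurts precision on small documents."""
    if num_shingles == 0:
        return 1.0
    return min(1.0, max(base_ratio, target_samples / num_shingles))

def fingerprint(text: str) -> set[int]:
    """Hash each shingle and keep a deterministic, size-dependent sample."""
    sh = shingles(text)
    cutoff = int(adaptive_ratio(len(sh)) * 2**32)
    # A shingle is kept iff its 32-bit hash falls below the cutoff, so the
    # same shingle is kept or dropped consistently across documents.
    hashes = (int.from_bytes(hashlib.md5(s.encode()).digest()[:4], "big")
              for s in sh)
    return {h for h in hashes if h < cutoff}

def jaccard(a: set[int], b: set[int]) -> float:
    """Jaccard similarity of two sampled fingerprints."""
    return len(a & b) / len(a | b) if a or b else 1.0

def is_duplicate(doc1: str, doc2: str, threshold: float = 0.9) -> bool:
    """Flag a pair as duplicates when the similarity of their sampled
    fingerprints reaches the similarity threshold."""
    return jaccard(fingerprint(doc1), fingerprint(doc2)) >= threshold
```

Under this illustrative rule, a 100-shingle page is sampled at ratio 0.5 so it still keeps about 50 shingles, while a 10,000-shingle page falls back to the base ratio of 0.1; a fixed ratio would leave the small page with too few samples for a reliable similarity estimate.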