Experiments with filtered detection of similar academic papers

  • Authors:
  • Yaakov HaCohen-Kerner; Aharon Tayeb

  • Affiliations:
  • Dept. of Computer Science, Jerusalem College of Technology, Jerusalem, Israel (both authors)

  • Venue:
  • AIMSA'12: Proceedings of the 15th International Conference on Artificial Intelligence: Methodology, Systems, and Applications
  • Year:
  • 2012


Abstract

In this research, we investigate the efficient detection of similar academic papers. Given a specific paper and a corpus of academic papers, most of the papers in the corpus are first filtered out using a fast filter method. Then, 47 methods (baseline methods and combinations of them) are applied to detect similar papers; 34 of these methods are variants of new methods. These 34 methods fall into three new method sets: rare words, combinations of at least two methods, and comparisons between portions of the papers. When compared against the results of the "Full Fingerprint" (FF) method, an expensive method that served as an expert, some of the 34 heuristic methods achieve better results than previous heuristic methods. Nevertheless, the run time of the new methods is far shorter than that of the FF method. The most interesting finding is a method called CWA(1), which computes the frequency of rare words that appear only once in both compared papers. This method has been found to be an efficient measure of whether two papers are similar.
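
The abstract does not spell out the exact computation of CWA(1); the Python sketch below shows one plausible reading of the idea, counting words that occur exactly once in both compared papers. The function names, tokenization, and scoring are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter
import re


def tokenize(text):
    """Lower-case a paper's text and split it into alphabetic word tokens."""
    return re.findall(r"[a-z]+", text.lower())


def cwa1_score(paper_a, paper_b):
    """Count the words that appear exactly once in BOTH papers.

    Hypothetical reading of CWA(1): words occurring only once in a paper
    are treated as rare, and the number of rare words shared by the two
    papers serves as a similarity signal.
    """
    counts_a = Counter(tokenize(paper_a))
    counts_b = Counter(tokenize(paper_b))
    rare_a = {w for w, c in counts_a.items() if c == 1}
    rare_b = {w for w, c in counts_b.items() if c == 1}
    return len(rare_a & rare_b)


# Usage: a higher score suggests the two papers are more likely to be similar.
print(cwa1_score("plagiarism detection via word fingerprints",
                 "detection of plagiarism using fingerprints"))
```

Because the score only requires per-document word counts and a set intersection, such a measure can be computed far faster than a full fingerprint comparison, which is consistent with the efficiency claim in the abstract.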