Measuring redundancy level on the web

  • Authors:
  • Alexander Afanasyev; Chunyi Peng; Jiangzhe Wang; Lixia Zhang

  • Affiliations:
  • UCLA; UCLA; UCLA; UCLA

  • Venue:
  • AINTEC '11: Proceedings of the 7th Asian Internet Engineering Conference
  • Year:
  • 2011

Abstract

This paper estimates the level of content redundancy on the Web using information collected from existing search engines. To make the measurement feasible, a representative set of Internet sites was collected by randomly sampling the DMOZ and Delicious Internet catalogs. Each page in the set was fingerprinted with a random 32-word phrase extracted from its content; these phrases were then submitted as search engine queries to infer the number of pages carrying the same content. Although the method is far from perfectly accurate, it approximates a lower bound on the visible redundancy of the Web: a verbatim match on such a long phrase most likely indicates a duplicate page, and only pages indexed by search engines are actually visible to users. The results show a surprisingly low level of duplication averaged over all content types, with fewer than ten duplicates for most pages. This indicates that, aside from well-known classes of highly redundant content (news, mailing list archives, etc.), content duplication and plagiarism are not widespread across all types of web pages.
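
The measurement pipeline described in the abstract is straightforward to sketch in code. The fragment below is an illustrative Python sketch, not the authors' implementation: extract_fingerprint_phrase and the caller-supplied search_result_count callable (standing in for whichever search engine API is queried) are hypothetical names introduced here.

    import random
    import re
    from typing import Callable, Optional

    def extract_fingerprint_phrase(page_text: str, phrase_len: int = 32) -> Optional[str]:
        # Pick a random contiguous run of phrase_len words from the page.
        # A 32-word phrase is long enough that a verbatim match on another
        # page almost certainly indicates duplicated content.
        words = re.findall(r"\w+", page_text)
        if len(words) < phrase_len:
            return None  # page too short to fingerprint reliably
        start = random.randrange(len(words) - phrase_len + 1)
        return " ".join(words[start:start + phrase_len])

    def estimate_duplicates(page_text: str,
                            search_result_count: Callable[[str], int]) -> int:
        # search_result_count submits an exact-phrase (quoted) query to a
        # search engine and returns the reported hit count, which serves as
        # an estimate of how many indexed pages share this content.
        phrase = extract_fingerprint_phrase(page_text)
        if phrase is None:
            return 1  # no phrase to query; count only the page itself
        return max(1, search_result_count('"%s"' % phrase))

Because search engines cap or round their reported hit counts and index only part of the Web, the returned count is best read as a lower bound, which matches the paper's framing of the result.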