This paper estimates the level of redundancy on the Web using information collected from existing search engines. To make the measurements feasible, a representative set of Internet sites was collected by randomly sampling the DMOZ and Delicious catalogs. Each page in the set was identified by a random 32-word phrase extracted from its content. These phrases were used as search-engine queries to infer the number of pages with the same content. Though the method is far from perfectly accurate, it approximates a lower bound on the visible redundancy of the Web: long phrases are likely to belong to duplicate pages, and only pages indexed by search engines are actually visible to users. The results show a surprisingly low level of duplication averaged over all content types, with fewer than ten duplicates for most pages. This indicates that, apart from well-known classes of highly redundant content (news, mailing-list archives, etc.), content duplication and plagiarism are not widespread across all types of web pages.
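A minimal sketch of the measurement step described above, assuming a hypothetical search_hit_count function that wraps some search-engine API and returns the number of indexed pages matching a quoted exact-phrase query; the paper does not specify its tooling, so all names and parameters here are illustrative:

```python
import random
import re

def extract_random_phrase(page_text, length=32):
    """Pick a random run of `length` consecutive words from the page text.

    A contiguous 32-word phrase is long enough that an exact match on
    another page is very unlikely to occur by chance, so each hit is a
    strong signal of duplicated content.
    """
    words = re.findall(r"\w+", page_text)
    if len(words) < length:
        return None  # page too short to fingerprint
    start = random.randrange(len(words) - length + 1)
    return " ".join(words[start:start + length])

def estimate_duplicates(page_text, search_hit_count):
    """Estimate how many indexed pages share this page's content.

    `search_hit_count` is a stand-in for any API that takes a query
    string and returns the reported number of matching pages.
    """
    phrase = extract_random_phrase(page_text)
    if phrase is None:
        return None
    # Quoting forces an exact-phrase match; the hit count approximates
    # the number of visible (indexed) copies of the sampled page.
    return search_hit_count(f'"{phrase}"')
```

Under this scheme the estimate is a lower bound by construction: pages not indexed by the queried engine, and duplicates that rephrase rather than copy the sampled 32-word run, are never counted.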