Fixing the threshold for effective detection of near duplicate web documents in web crawling

  • Authors:
  • V. A. Narayana; P. Premchand; A. Govardhan

  • Affiliations:
  • Department of Computer Science & Engineering, CMR College of Engineering & Technology, Hyderabad, India; Department of Computer Science & Engineering, University College of Engineering, Osmania University, Hyderabad, AP, India; JNTUH College of Engineering, AP, India

  • Venue:
  • ADMA '10: Proceedings of the 6th International Conference on Advanced Data Mining and Applications: Part I
  • Year:
  • 2010

Abstract

The rapid growth of the WWW in recent times has made web crawling remarkably significant. The voluminous web documents swarming the web pose huge challenges to web search engines, making their results less relevant to users. The abundance of duplicate and near-duplicate web documents creates additional overhead for search engines and critically affects their performance and quality; such documents must be removed to provide users with relevant results for their queries. In this paper, we present a novel and efficient approach for detecting near-duplicate web pages during web crawling: keywords are extracted from the crawled pages, and a similarity score between two pages is computed. Documents whose similarity score exceeds a threshold value are considered near duplicates. In this paper, we fix that threshold value.
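
The abstract outlines the detection pipeline (keyword extraction, pairwise similarity scoring, threshold comparison) without giving its exact definitions. Below is a minimal Python sketch of that pipeline, assuming term-frequency keyword extraction, Jaccard similarity over keyword sets, and a threshold of 0.75; all three are illustrative choices, not the paper's actual formulas or fixed threshold.

```python
from collections import Counter
import re

# Small illustrative stopword list; a real crawler would use a fuller one.
STOPWORDS = {"the", "a", "an", "and", "or", "of", "in", "to", "is", "for"}

def extract_keywords(text: str, top_k: int = 20) -> set[str]:
    """Return the top_k most frequent non-stopword terms of a page.
    Term-frequency ranking is an assumption; the abstract does not
    specify how keywords are extracted."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return {w for w, _ in counts.most_common(top_k)}

def similarity_score(page_a: str, page_b: str) -> float:
    """Jaccard similarity of the two pages' keyword sets
    (an illustrative measure, not necessarily the paper's)."""
    ka, kb = extract_keywords(page_a), extract_keywords(page_b)
    if not ka and not kb:
        return 1.0  # two empty pages are trivially identical
    return len(ka & kb) / len(ka | kb)

THRESHOLD = 0.75  # hypothetical value; choosing it well is the paper's topic

def is_near_duplicate(page_a: str, page_b: str) -> bool:
    """Flag pages whose similarity score exceeds the threshold."""
    return similarity_score(page_a, page_b) > THRESHOLD
```

A crawler would apply `is_near_duplicate` to each newly fetched page against candidate matches in its index, skipping storage of pages flagged as near duplicates; the quality of that filtering hinges on where the threshold is fixed, which is the question the paper addresses.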