Implementation of Web Crawler

  • Authors:
  • Pooja Gupta; Kalpana Johari

  • Venue:
  • ICETET '09 Proceedings of the 2009 Second International Conference on Emerging Trends in Engineering & Technology
  • Year:
  • 2009

Abstract

The World Wide Web is an interlinked collection of billions of documents formatted using HTML. Ironically, the very size of this collection has become an obstacle to information retrieval: the user has to sift through scores of pages to come upon the information he or she desires. Web crawlers are the heart of search engines; they continuously crawl the web, finding new pages that have been added and detecting pages that have been removed. Due to the growing and dynamic nature of the web, it has become a challenge to traverse all URLs in web documents and to handle them. A focused crawler is an agent that targets a particular topic and visits and gathers only relevant web pages. In this work we present the design and operation of a web crawler that can be used for copyright infringement detection. The crawler takes one seed URL and a keyword as input and fetches the web pages in which that keyword is found. This focused-crawler approach retrieves documents that contain a particular keyword from the user's query; the traversal is implemented using breadth-first search. Once the web pages have been retrieved, pattern matching is applied to their text: a file is given as input, and pattern-matching algorithms check how much of its text is present on the web page (here, a pattern denotes text only). The algorithms used for pattern search are Knuth-Morris-Pratt, Boyer-Moore, and the finite-automaton string-matching algorithm.
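As a concrete illustration of the crawl described above, here is a minimal Python sketch of a breadth-first, keyword-focused crawler. It assumes the third-party requests and beautifulsoup4 packages; the function name crawl and the max_pages cap are illustrative choices, not details taken from the paper.

```python
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def crawl(seed_url, keyword, max_pages=50):
    """Breadth-first crawl from seed_url, collecting pages that contain keyword."""
    queue = deque([seed_url])   # FIFO queue yields breadth-first traversal order
    visited = {seed_url}
    matches = []
    fetched = 0

    while queue and fetched < max_pages:
        url = queue.popleft()
        try:
            response = requests.get(url, timeout=5)
        except requests.RequestException:
            continue            # skip unreachable or slow pages
        fetched += 1

        soup = BeautifulSoup(response.text, "html.parser")
        text = soup.get_text()

        if keyword.lower() in text.lower():
            matches.append(url)  # page is relevant to the user's query

        # Enqueue unseen outgoing links; they form the next BFS levels.
        for anchor in soup.find_all("a", href=True):
            link = urljoin(url, anchor["href"])
            if link.startswith("http") and link not in visited:
                visited.add(link)
                queue.append(link)

    return matches
```

For example, crawl("http://example.com", "copyright") would return every reachable page (up to the cap) whose text mentions the keyword, giving the set of candidate pages for the subsequent pattern-matching step.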
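Of the three string matchers the abstract names, Knuth-Morris-Pratt is the most compact to sketch. The version below is the standard textbook failure-function formulation, not necessarily the authors' exact implementation.

```python
def kmp_search(text, pattern):
    """Return the start index of every occurrence of pattern in text."""
    if not pattern:
        return []

    # Failure function: fail[i] is the length of the longest proper
    # prefix of pattern[:i + 1] that is also a suffix of it.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k

    # Scan the text; on a mismatch, fall back via the failure function
    # instead of re-examining text characters.
    matches = []
    k = 0
    for i, ch in enumerate(text):
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            matches.append(i - len(pattern) + 1)
            k = fail[k - 1]
    return matches
```

One plausible way to obtain the "how much text is available on the web page" measure is to split the input file into snippets, run kmp_search(page_text, snippet) for each, and report the fraction of snippets found; the abstract does not spell out this scoring step, so it is an assumption here.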