Parallel crawler architecture and web page change detection
WSEAS Transactions on Computers
Topical web crawling using weighted anchor text and web page change detection techniques
WSEAS Transactions on Information Science and Applications
Focused web crawler with revisit policy
Proceedings of the International Conference & Workshop on Emerging Trends in Technology
Freshness tuning in focused crawler
Proceedings of the International Conference & Workshop on Emerging Trends in Technology
In this paper, we put forward a technique for parallel crawling of the web. The World Wide Web is growing at a phenomenal rate; as of February 2007, its size stood at around 29 billion pages. One of the most important uses of crawling the web is indexing and keeping web pages up-to-date, so that a search engine can serve end-user queries from fresh data. The paper puts forward an architecture built along the lines of a client-server model. It discusses a fresh approach to crawling the web in parallel using multiple machines, and also incorporates handling of the routine issues of crawling. A major part of the web is dynamic, and hence a need arises to constantly update changed web pages. We use a three-step algorithm for page refreshment, which checks whether the structure of a web page has changed, whether its text content has been altered, and whether an image has changed. For the server, we discuss a unique method for distributing URLs to client machines after determining their priority index. A minor variation of the method of prioritizing URLs on the basis of forward-link count is also discussed, to accommodate the frequency of update.
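The three-step refreshment check described above (structure, text content, images) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes each aspect of a page is reduced to a hash so that a recrawled copy can be compared cheaply against the stored one; the function names and the regex-based parsing are hypothetical.

```python
import hashlib
import re

def _digest(s: str) -> str:
    """Stable hash used to compare page components cheaply."""
    return hashlib.sha256(s.encode("utf-8")).hexdigest()

def page_signature(html: str) -> dict:
    """Reduce a page to the three aspects the abstract mentions:
    tag structure, text content, and image references.
    (Hypothetical helper; regex parsing is for illustration only.)"""
    tags = re.findall(r"<\s*(\w+)", html)                # structure: sequence of tag names
    text = " ".join(re.sub(r"<[^>]+>", " ", html).split())  # text: markup stripped
    imgs = re.findall(r'<img[^>]*src="([^"]*)"', html)   # images: src attributes
    return {
        "structure": _digest(" ".join(tags)),
        "text": _digest(text),
        "images": _digest("|".join(imgs)),
    }

def detect_changes(old_html: str, new_html: str) -> list:
    """Return which of the three aspects differ between two fetches."""
    old, new = page_signature(old_html), page_signature(new_html)
    return [k for k in old if old[k] != new[k]]
```

A page whose wording changed but whose markup and images did not would report only a text change, letting the crawler decide how much of the stored copy to refresh.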
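The prioritization variant mentioned above can likewise be sketched. This is an assumption-laden illustration, not the paper's formula: it supposes a URL's priority index combines its forward-link count with an observed change rate (the exact weighting used in the paper is not given in the abstract), and that the server hands out URLs in descending priority order.

```python
import heapq

def prioritize(urls, forward_links, change_rate):
    """Rank URLs by forward-link count weighted by update frequency.

    forward_links: dict url -> number of forward links (from the crawl graph)
    change_rate:   dict url -> observed fraction of revisits that saw a change

    The multiplicative weighting (1 + change_rate) is a hypothetical
    stand-in for the paper's 'minor variation' on link-count priority.
    """
    # Negate scores so the min-heap pops the highest-priority URL first.
    heap = [
        (-(forward_links.get(u, 0) * (1 + change_rate.get(u, 0.0))), u)
        for u in urls
    ]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]
```

Under this weighting, a frequently changing page can outrank a better-linked but static one, which matches the abstract's goal of folding update frequency into the priority index.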