This paper shares our experience in designing a web crawler that can download billions of pages using a single-server implementation, and models its performance. We show that, due to the quadratically increasing complexity of verifying URL uniqueness, BFS crawl order, and fixed per-host rate-limiting, current crawling algorithms cannot effectively cope with the sheer volume of URLs generated in large crawls, highly branching spam, legitimate multi-million-page blog sites, and infinite loops created by server-side scripts. We offer a set of techniques for dealing with these issues and test their performance in an implementation we call IRLbot. In a recent 41-day experiment, IRLbot running on a single server crawled 6.3 billion valid HTML pages (7.6 billion connection requests) and sustained an average download rate of 319 Mb/s (1,789 pages/s). Unlike our prior experiments with algorithms proposed in related work, this version of IRLbot did not experience any bottlenecks and successfully handled content from over 117 million hosts, parsed out 394 billion links, and discovered a subset of the web graph with 41 billion unique nodes.
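To make the URL-uniqueness problem concrete, the sketch below shows the basic hash-based deduplication idea at the heart of any "seen-URL" check: store a compact fingerprint of each URL rather than the string itself, and admit a URL into the frontier only if its fingerprint is new. This is a minimal in-memory illustration only; IRLbot's published design batches fingerprints to disk precisely because an in-memory set cannot scale to tens of billions of URLs. All class and method names here are illustrative, not from the paper.

```python
import hashlib

class SeenURLFilter:
    """Illustrative in-memory URL-uniqueness check (NOT IRLbot's
    disk-based design; a small-scale sketch of the same idea)."""

    def __init__(self):
        # Store 64-bit fingerprints instead of full URL strings to
        # keep the per-URL memory footprint small and fixed-size.
        self.seen = set()

    def is_new(self, url: str) -> bool:
        # Truncate a SHA-1 digest to 8 bytes; at crawl scale a real
        # system must also reason about collision probability.
        fp = int.from_bytes(hashlib.sha1(url.encode()).digest()[:8], "big")
        if fp in self.seen:
            return False
        self.seen.add(fp)
        return True
```

At billions of URLs even 8-byte fingerprints exceed RAM, which is why the check must be restructured around sequential disk batches rather than random-access lookups, the bottleneck the abstract refers to.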