Web spiders are a widely used means of gathering information for search engines. As the Web grows, parallelizing the crawling process becomes a natural choice. However, parallel execution often produces redundant web pages that waste vast amounts of storage, and avoiding this redundancy is a significant issue in the design of next-generation web spiders. In this paper, we apply methods from multi-agent coordination to design a parallel spider model and implement it on the multi-agent platform MAGE. Under the control of a central facilitator agent, the spiders coordinate with one another to avoid fetching redundant pages during the crawl. Experimental results demonstrate that the model markedly improves collection efficiency and eliminates redundant pages at only a small coordination cost.
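
The coordination scheme the abstract describes, a central facilitator agent that arbitrates which spider crawls which URL, can be sketched in Python as below. This is a minimal illustration under assumptions, not the MAGE implementation: the names Facilitator, spider, and fetch are hypothetical, and MAGE's agent messaging is replaced here by a shared in-process queue and lock.

    import threading
    import queue

    class Facilitator:
        """Central coordinator (hypothetical sketch, not the MAGE API):
        hands out URLs and filters duplicates so that no two spiders
        ever crawl the same page."""

        def __init__(self, seeds):
            self._seen = set(seeds)          # every URL ever admitted
            self._frontier = queue.Queue()   # URLs awaiting a spider
            for url in seeds:
                self._frontier.put(url)
            self._lock = threading.Lock()

        def next_url(self, timeout=1.0):
            """Give one unclaimed URL to a requesting spider, or None
            if the frontier stays empty for `timeout` seconds."""
            try:
                return self._frontier.get(timeout=timeout)
            except queue.Empty:
                return None

        def report(self, discovered):
            """Accept links found by a spider; enqueue only URLs never
            seen before. This is the central redundancy-elimination step."""
            with self._lock:
                for url in discovered:
                    if url not in self._seen:
                        self._seen.add(url)
                        self._frontier.put(url)

    def spider(facilitator, fetch, pages_out):
        """One crawler thread. `fetch` is an assumed user-supplied
        function that downloads a page and returns (content, links)."""
        while True:
            url = facilitator.next_url()
            if url is None:
                break  # frontier drained; this spider retires
            content, links = fetch(url)
            pages_out.append((url, content))
            facilitator.report(links)

Running several spider threads against one shared Facilitator instance gives the redundancy guarantee: because membership in the seen-set is checked and updated under a single lock, each URL enters the frontier at most once, so no two spiders download the same page. The price is one synchronization point per batch of reported links, consistent with the small efficiency cost noted above.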