Some of the reasons for unsatisfactory performance of today's search engines are their centralized approach to web crawling and lack of explicit support from web servers. We propose a modification to conventional crawling in which a search engine uploads simple agents, called crawlets, to web sites. A crawlet crawls pages at a site locally and sends a compact summary back to the search engine. This not only reduces bandwidth requirements and network latencies, but also parallelizes crawling. Crawlets also provide an effective means for achieving the performance gains of personalized web servers, and can make up for the lack of cooperation from conventional web servers. The specialized nature of crawlets allows simple solutions to security and resource control problems, and reduces software requirements at participating web sites. In fact, we propose an implementation that requires no changes to web servers, but only the installation of a few (active) web pages at host sites.