Crawlets: Agents for High Performance Web Search Engines
MA '01 Proceedings of the 5th International Conference on Mobile Agents
We study how to keep Internet search engines up to date with the changes occurring at web servers across the Internet. Currently, search engines poll web servers on a per-URL basis to obtain update information. We advocate an approach in which web servers themselves track the changes to their content files and propagate updates to search engines. We propose an algorithm that uses both the freshness and the popularity of data at a web server to decide the discrepancy between a web site and a search engine, and that batches the push of updates from the web server to the search engine. We prove that this algorithm is competitive with an optimal algorithm.
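The abstract does not give the algorithm itself, but the core idea it describes, weighting the staleness of each changed page by its popularity and pushing updates in batches once the accumulated discrepancy grows too large, can be sketched as follows. This is a minimal illustration under assumed definitions; the discrepancy score, the threshold, and all names here are hypothetical, not the authors' algorithm:

```python
class ChangeTracker:
    """Hypothetical server-side tracker: accumulates changed URLs and
    pushes a batch once a popularity-weighted staleness score (the
    'discrepancy' between web site and search engine) exceeds a
    threshold. Illustrative sketch only; not the paper's algorithm."""

    def __init__(self, threshold, popularity):
        self.threshold = threshold    # discrepancy level that triggers a push
        self.popularity = popularity  # url -> access frequency (assumed known)
        self.changed_at = {}          # url -> time of earliest un-pushed change
        self.pushed = []              # batches sent so far (for illustration)

    def record_change(self, url, now):
        # Only the earliest un-propagated change to a page matters:
        # later edits to the same page do not make it "more stale".
        self.changed_at.setdefault(url, now)
        if self.discrepancy(now) >= self.threshold:
            self.push_batch()

    def discrepancy(self, now):
        # Popularity-weighted staleness: popular pages that have been
        # out of date longer contribute more to the divergence.
        return sum(self.popularity.get(u, 1) * (now - t)
                   for u, t in self.changed_at.items())

    def push_batch(self):
        # Batch all pending updates into a single push to the engine.
        self.pushed.append(sorted(self.changed_at))
        self.changed_at.clear()


tracker = ChangeTracker(threshold=10.0,
                        popularity={"/index.html": 5, "/news.html": 2})
tracker.record_change("/index.html", now=0.0)  # discrepancy 0, no push
tracker.record_change("/news.html", now=1.0)   # 5*1 + 2*0 = 5, no push
tracker.record_change("/news.html", now=2.0)   # 5*2 + 2*1 = 12 >= 10, push
```

Batching amortizes the cost of contacting the search engine over many changed pages, while the popularity weight ensures that heavily accessed pages do not stay stale for long.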