A co-operative web services paradigm for supporting crawlers
Large Scale Semantic Access to Content (Text, Image, Video, and Sound)
Web crawler design presents many different challenges: architecture, strategies, performance, and more. One of the most important research topics concerns improving the selection of "interesting" web pages (for the user) according to importance metrics. Another relevant point is content freshness, i.e. maintaining the freshness and consistency of temporarily stored copies. To this end, the crawler periodically repeats its activity, going over stored contents (the re-crawling process). In this paper, we propose a scheme that permits a crawler to acquire information about the global state of a website before the crawling process takes place. This scheme requires web server cooperation in order to collect and publish information on its content, useful for enabling a crawler to tune its visit strategy. If this information is unavailable or not updated, the crawler still acts in the usual manner. In this sense, the proposed scheme is not invasive and is independent of any crawling strategy and architecture.
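The cooperative idea in the abstract can be illustrated with a minimal sketch. Here we assume, purely for illustration, that the server publishes a summary mapping each URL to a last-modified timestamp (the paper's actual published format is not specified here); the crawler consults it to re-crawl only stale copies, and falls back to a full re-crawl when the summary is unavailable:

```python
from typing import Dict, List, Optional


def select_urls_to_recrawl(
    stored_copies: Dict[str, int],
    server_summary: Optional[Dict[str, int]],
) -> List[str]:
    """Pick which stored URLs to re-crawl.

    stored_copies:  URL -> timestamp at which the crawler fetched its copy.
    server_summary: URL -> last-modified timestamp, as published by a
                    cooperating server (a hypothetical format), or None
                    when the server offers no such information.
    """
    if server_summary is None:
        # No cooperation available: behave as usual and re-crawl everything.
        return sorted(stored_copies)

    stale = []
    for url, fetched_at in stored_copies.items():
        last_modified = server_summary.get(url)
        # Re-crawl if the page changed since our visit, or if the summary
        # no longer lists it (its state is unknown to us).
        if last_modified is None or last_modified > fetched_at:
            stale.append(url)
    return sorted(stale)
```

With a summary available, only pages modified after the crawler's last visit are selected, which is the tuning of the visit strategy the scheme enables; without one, the behaviour degrades gracefully to the ordinary re-crawl, matching the non-invasive property claimed above.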