Sitemaps: above and beyond the crawl of duty

  • Authors:
  • Uri Schonfeld; Narayanan Shivakumar

  • Affiliations:
  • UCLA Computer Science Department, Los Angeles, CA, USA; Google Inc., Mountain View, CA, USA

  • Venue:
  • Proceedings of the 18th International Conference on World Wide Web (WWW 2009)
  • Year:
  • 2009

Abstract

Comprehensive coverage of the public web is crucial to web search engines. Search engines use crawlers to retrieve pages and then discover new ones by extracting the pages' outgoing links. However, the set of pages reachable from the publicly linked web is estimated to be significantly smaller than the invisible web, the set of documents that have no incoming links and can only be retrieved through web applications and web forms. The Sitemaps protocol is a fast-growing web protocol supported jointly by major search engines to help content creators and crawlers unlock this hidden data by making it available to search engines. In this paper, we perform a detailed study of how "classic" discovery crawling compares with Sitemaps, on key measures such as coverage and freshness, over representative websites as well as over billions of URLs seen at Google. We observe that Sitemaps and discovery crawling complement each other very well, and offer different tradeoffs.
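
For context, the Sitemaps protocol discussed in the paper is an XML format (documented at sitemaps.org) in which a site enumerates its URLs, optionally with last-modification times, so that crawlers can fetch content they might never reach by following links alone. The sketch below is an illustrative example only, not code from the paper; the example URLs and the build_sitemap helper are hypothetical.

```python
# Illustrative sketch of the Sitemaps XML format (sitemaps.org); not code from the paper.
# The example URLs and the build_sitemap helper are hypothetical.
import xml.etree.ElementTree as ET
from datetime import date

SITEMAP_NS = "http://www.sitemaps.org/schemas/sitemap/0.9"

def build_sitemap(entries):
    """Build a <urlset> element from (url, last_modified) pairs."""
    urlset = ET.Element("urlset", xmlns=SITEMAP_NS)
    for loc, lastmod in entries:
        url = ET.SubElement(urlset, "url")
        ET.SubElement(url, "loc").text = loc
        ET.SubElement(url, "lastmod").text = lastmod.isoformat()
    return urlset

if __name__ == "__main__":
    # Pages a discovery crawler might miss, e.g. content normally
    # reachable only through a search form on the site.
    entries = [
        ("https://example.com/archive/item?id=1", date(2009, 4, 20)),
        ("https://example.com/archive/item?id=2", date(2009, 4, 21)),
    ]
    print('<?xml version="1.0" encoding="UTF-8"?>')
    print(ET.tostring(build_sitemap(entries), encoding="unicode"))
```

A site typically serves such a file at a path like /sitemap.xml and can also announce it to crawlers through a Sitemap: line in robots.txt.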