Current search engines maintain a local repository to improve search efficiency. A crawler periodically polls remote web pages to update the contents of this repository. Because of resource limitations, some local pages may become stale. To keep the repository fresh, the crawler should revisit remote web pages in an optimized order and at optimized frequencies. The intuitive freshness metric for the repository is the fraction of its pages that are up to date. This metric is based solely on the repository content and, unfortunately, does not reflect the perspective of search engine users, e.g., how often a given page is queried. We propose a novel weighted freshness metric that uses the importance of web pages as weights, thereby accounting not only for the local pages themselves but also for the perspective of search engine users. Under this new metric, we study the repository synchronization policy, compare the metric with existing ones, analyze its properties, and discuss how web page importance can be determined.
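To make the contrast concrete, the two metrics can be written as follows. This is a minimal formalization for illustration only; the notation (N pages, staleness indicators f_i, importance weights w_i) is assumed here and may differ from the paper's, and the weighted form assumes normalization by the sum of the weights.

```latex
% Unweighted freshness: fraction of up-to-date pages in a repository of N pages,
% where f_i = 1 if local copy i matches the remote page and f_i = 0 otherwise.
F(R) = \frac{1}{N} \sum_{i=1}^{N} f_i

% Weighted freshness: the same indicators weighted by page importance w_i,
% e.g., how often page i is queried according to the search engine's logs.
F_w(R) = \frac{\sum_{i=1}^{N} w_i \, f_i}{\sum_{i=1}^{N} w_i}
```

Note that setting all w_i equal recovers the unweighted metric, so the weighted form strictly generalizes it; deriving w_i from query frequency is one natural way to inject the user perspective the abstract calls for.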