An Online Learning Framework for Refining Recency Search Results with User Click Feedback

  • Authors:
  • Taesup Moon, Wei Chu, Lihong Li, Zhaohui Zheng, Yi Chang

  • Affiliations:
  • Yahoo! Labs (all authors)

  • Venue:
  • ACM Transactions on Information Systems (TOIS)
  • Year:
  • 2012

Abstract

Traditional machine-learned ranking systems for Web search are often trained to capture the stationary relevance of documents to queries, and thus have limited ability to track nonstationary user intention in a timely manner. In recency search, for instance, the relevance of documents to a query on breaking news often changes significantly over time, requiring effective adaptation to user intention. In this article, we focus on recency search and study a number of algorithms that improve ranking results by leveraging user click feedback. Our contributions are threefold. First, we use commercial search engine sessions collected in a random exploration bucket for reliable offline evaluation of these algorithms, which provides an unbiased comparison across algorithms without online bucket tests. Second, we propose an online learning approach that reranks and improves the search results for recency queries in near real time based on user clicks. This approach is very general and can be combined with sophisticated click models. Third, our empirical comparison of a dozen algorithms on real-world search data suggests the importance of a few algorithmic choices in these applications, including generalization across different query-document pairs, specialization to popular queries, and near real-time adaptation to user clicks for reranking.
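
The first contribution builds on a standard property of uniformly random exploration logs: a new ranking policy can be evaluated offline, without bias, by replaying only the logged impressions on which the policy's choice matches what was actually shown. The sketch below illustrates this replay idea; the event schema, the `replay_evaluate` function, and the toy log are illustrative assumptions, not the paper's actual evaluation protocol.

```python
import random

def replay_evaluate(policy, logged_events):
    """Unbiased offline evaluation from uniformly random exploration logs.

    Each logged event is assumed (illustratively) to be a dict holding the
    query, the candidate documents, the document that was shown uniformly
    at random, and whether it was clicked. Only events where the policy
    would have shown the same document are kept; their average click rate
    estimates the policy's online CTR. This is the generic replay idea
    behind random-bucket evaluation, not the paper's exact procedure.
    """
    matched, clicks = 0, 0
    for event in logged_events:
        chosen = policy(event["query"], event["candidates"])
        if chosen == event["shown"]:  # policy agrees with the random log
            matched += 1
            clicks += event["clicked"]
    return clicks / matched if matched else float("nan")

# Toy log: documents shown uniformly at random in an exploration bucket.
log = [
    {"query": "q", "candidates": ["a", "b"],
     "shown": random.choice(["a", "b"]), "clicked": random.random() < 0.3}
    for _ in range(1000)
]
always_a = lambda query, candidates: candidates[0]
print(f"estimated CTR of 'always a': {replay_evaluate(always_a, log):.3f}")
```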
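
The abstract only summarizes the second contribution, so the following is a rough sketch of what near real-time click-feedback reranking can look like: a smoothed click-through estimate is maintained per query-document pair and blended with the base ranker's score. The class name, the Beta(1, 9) smoothing prior, and the blending weight `w` are all assumptions made for illustration; the paper's actual approach, including its use of click models, is more sophisticated.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ClickStats:
    clicks: float = 0.0
    views: float = 0.0

    def ctr(self, alpha: float = 1.0, beta: float = 9.0) -> float:
        # Posterior-mean click-through estimate under an assumed
        # Beta(alpha, beta) prior, to avoid overreacting to sparse data.
        return (self.clicks + alpha) / (self.views + alpha + beta)

class ClickFeedbackReranker:
    """Blend a base ranker score with an online click-through estimate."""

    def __init__(self, w: float = 0.5):
        self.w = w  # weight on click feedback vs. the base ranker
        self.stats = defaultdict(ClickStats)  # keyed by (query, doc_id)

    def update(self, query: str, doc_id: str, clicked: bool) -> None:
        # Near real-time update from a single observed impression.
        s = self.stats[(query, doc_id)]
        s.views += 1
        s.clicks += 1.0 if clicked else 0.0

    def rerank(self, query: str, results: list[tuple[str, float]]) -> list[str]:
        # results: (doc_id, base_score) pairs from the stationary ranker.
        def score(item):
            doc_id, base = item
            return (1 - self.w) * base + self.w * self.stats[(query, doc_id)].ctr()
        return [doc_id for doc_id, _ in sorted(results, key=score, reverse=True)]

# Example: repeated clicks on the fresher document gradually promote it
# above a document the stationary ranker prefers.
rr = ClickFeedbackReranker(w=0.6)
for _ in range(50):
    rr.update("breaking news", "fresh_article", clicked=True)
    rr.update("breaking news", "stale_article", clicked=False)
print(rr.rerank("breaking news", [("stale_article", 0.9), ("fresh_article", 0.7)]))
```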