Improving on-demand learning to rank through parallelism

  • Authors:
  • Daniel Xavier de Sousa; Thierson Couto Rosa; Wellington Santos Martins; Rodrigo Silva; Marcos André Gonçalves

  • Affiliations:
  • Instituto Federal de Goiás, Anápolis, Brazil; Instituto de Informática, UFG, Goiânia, Brazil; Instituto de Informática, UFG, Goiânia, Brazil; Departamento de Ciência da Computação, UFMG, Belo Horizonte, Brazil; Departamento de Ciência da Computação, UFMG, Belo Horizonte, Brazil

  • Venue:
  • WISE'12: Proceedings of the 13th International Conference on Web Information Systems Engineering
  • Year:
  • 2012

Abstract

Traditional Learning to Rank (L2R) is usually conducted in a batch mode in which a single ranking function is learned to order results for future queries. This approach is not flexible, since future queries may differ considerably from those present in the training set and, consequently, the learned function may not work properly. Ideally, a distinct ranking function should be learned on demand for each query. Nevertheless, on-demand L2R may significantly degrade the query processing time, as the ranking function has to be learned on the fly before it can be applied. In this paper we present a parallel implementation of an on-demand L2R technique that drastically reduces the response time of a previous serial implementation. Our implementation makes use of thousands of GPU threads to learn a ranking function for each query, and takes advantage of a reduced training set obtained through active learning. Experiments with the LETOR benchmark show that our proposed approach achieves a mean speedup of 127x in query processing time when compared to the sequential version, while producing very competitive ranking effectiveness.
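To make the per-query, GPU-parallel idea concrete, the sketch below shows a minimal CUDA illustration, not the authors' implementation: it assumes a linear ranking function fit on the fly by gradient descent over a small, per-query training set (standing in for the active-learning-reduced set), with one thread per training example during learning and one thread per candidate document during scoring. The feature count, loss, learning rate, and all function names (gradientKernel, scoreKernel, rankOnDemand) are illustrative assumptions.

```cuda
#include <cuda_runtime.h>

#define NUM_FEATURES 46   // illustrative; roughly the LETOR feature count

// One thread per training example: accumulate the squared-loss gradient
// for a linear ranking model w into a shared gradient vector.
__global__ void gradientKernel(const float *X, const float *y, const float *w,
                               float *grad, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    const float *xi = &X[i * NUM_FEATURES];
    float pred = 0.0f;
    for (int f = 0; f < NUM_FEATURES; ++f) pred += w[f] * xi[f];
    float err = pred - y[i];
    for (int f = 0; f < NUM_FEATURES; ++f)
        atomicAdd(&grad[f], 2.0f * err * xi[f] / n);
}

// One thread per candidate document: score it with the per-query model.
__global__ void scoreKernel(const float *X, const float *w, float *scores, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    float s = 0.0f;
    for (int f = 0; f < NUM_FEATURES; ++f) s += w[f] * X[i * NUM_FEATURES + f];
    scores[i] = s;
}

// For one incoming query: learn w from the reduced training set already on the
// device, then score the query's candidate documents in parallel.
void rankOnDemand(const float *dTrainX, const float *dTrainY, int nTrain,
                  const float *dCandX, float *dScores, int nCand,
                  int epochs = 50, float lr = 0.01f) {
    float *dW, *dGrad;
    cudaMalloc(&dW, NUM_FEATURES * sizeof(float));
    cudaMalloc(&dGrad, NUM_FEATURES * sizeof(float));
    cudaMemset(dW, 0, NUM_FEATURES * sizeof(float));

    float hW[NUM_FEATURES] = {0}, hGrad[NUM_FEATURES];
    int threads = 256;
    for (int e = 0; e < epochs; ++e) {
        cudaMemset(dGrad, 0, NUM_FEATURES * sizeof(float));
        gradientKernel<<<(nTrain + threads - 1) / threads, threads>>>(
            dTrainX, dTrainY, dW, dGrad, nTrain);
        cudaMemcpy(hGrad, dGrad, sizeof(hGrad), cudaMemcpyDeviceToHost);
        for (int f = 0; f < NUM_FEATURES; ++f) hW[f] -= lr * hGrad[f];
        cudaMemcpy(dW, hW, sizeof(hW), cudaMemcpyHostToDevice);
    }
    scoreKernel<<<(nCand + threads - 1) / threads, threads>>>(dCandX, dW, dScores, nCand);
    cudaDeviceSynchronize();
    cudaFree(dW);
    cudaFree(dGrad);
}
```

The design choice mirrors the abstract's claim only in spirit: the expensive part of on-demand learning (touching every training example each iteration) is spread across thousands of GPU threads, while the tiny weight update runs on the host; the actual paper's learning algorithm and data layout may differ substantially.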