Group recommendations via multi-armed bandits

  • Authors:
  • José Bento; Stratis Ioannidis; S. Muthukrishnan; Jinyun Yan

  • Affiliations:
  • Stanford University, Stanford, CA, USA; Technicolor, Palo Alto, CA, USA; Rutgers University, Piscataway, NJ, USA; Rutgers University, Piscataway, NJ, USA

  • Venue:
  • Proceedings of the 21st International Conference Companion on World Wide Web
  • Year:
  • 2012

Abstract

We study recommendations for persistent groups, that is, groups of users that repeatedly engage in a joint activity. We cast this as a multi-armed bandit problem and design a recommendation policy that achieves logarithmic regret. Our analysis also shows that regret grows linearly with d, the size of the underlying persistent group. We evaluate our policy on movie recommendations over the MovieLens and MoviePilot datasets.
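
The abstract does not spell out the policy itself. For intuition only, the sketch below shows a generic UCB1-style bandit loop in Python, with the group's reward for an item modeled as the average of its d members' binary responses. The item set, member probabilities, and averaging rule are illustrative assumptions for this sketch, not the paper's construction.

import math
import random

def ucb1_group_recommend(reward_fns, n_rounds):
    """Run a UCB1 loop over candidate items (arms).

    reward_fns: list of callables; reward_fns[a]() returns an observed
    group reward in [0, 1] for item a (here, an assumed average of the
    group members' binary responses).
    """
    n_arms = len(reward_fns)
    assert n_rounds >= n_arms, "need at least one pull per arm"
    counts = [0] * n_arms      # times each item was recommended
    totals = [0.0] * n_arms    # cumulative reward per item

    for t in range(1, n_rounds + 1):
        if t <= n_arms:
            arm = t - 1        # play each arm once to initialize
        else:
            # UCB1 index: empirical mean + exploration bonus
            arm = max(
                range(n_arms),
                key=lambda a: totals[a] / counts[a]
                + math.sqrt(2.0 * math.log(t) / counts[a]),
            )
        r = reward_fns[arm]()
        counts[arm] += 1
        totals[arm] += r

    # Return the item with the best empirical group reward.
    return max(range(n_arms), key=lambda a: totals[a] / counts[a])

# Toy usage: a persistent group of d = 4 members. Each member accepts
# item a with a fixed (hypothetical) probability, and the group reward
# is the mean of the members' binary responses.
member_probs = [
    [0.9, 0.2, 0.5],   # member 1's acceptance probability per item
    [0.8, 0.3, 0.5],
    [0.7, 0.4, 0.5],
    [0.9, 0.1, 0.5],
]
reward_fns = [
    (lambda a=a: sum(random.random() < p[a] for p in member_probs)
                 / len(member_probs))
    for a in range(3)
]
print(ucb1_group_recommend(reward_fns, 5000))  # most likely prints 0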