Contextual Video Recommendation by Multimodal Relevance and User Feedback

  • Authors:
  • Tao Mei (Microsoft Research Asia); Bo Yang (University of Southern California); Xian-Sheng Hua (Microsoft Research Asia); Shipeng Li (Microsoft Research Asia)

  • Venue:
  • ACM Transactions on Information Systems (TOIS)
  • Year:
  • 2011

Abstract

With Internet delivery of video content surging to an unprecedented level, video recommendation, which suggests relevant videos to targeted users according to their historical and current viewings or preferences, has become one of the most pervasive online video services. This article presents a novel contextual video recommendation system, called VideoReach, based on multimodal content relevance and user feedback. We consider that an online video usually consists of multiple modalities (i.e., the visual and audio tracks, as well as associated texts such as the query, keywords, and surrounding text). The recommended videos should therefore be relevant to the current viewing in terms of multimodal relevance. We also consider that different parts of a video hold different degrees of interest for a user, and that different features and modalities contribute differently to the overall relevance. As a result, the recommended videos should also be relevant to the current user in terms of user feedback (i.e., user click-through). We then design a unified framework for VideoReach that seamlessly integrates both multimodal relevance and user feedback through relevance feedback and attention fusion. VideoReach represents one of the first attempts at contextual recommendation driven by video content and user click-through, without assuming that a sufficient collection of user profiles is available. We conducted experiments on large-scale real-world video data and report the effectiveness of VideoReach.
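
The abstract describes two mechanisms: fusing per-modality relevance scores into an overall score, and adapting the fusion weights from user click-through. The snippet below is a minimal, hypothetical sketch of that idea; the modality list, the linear fusion, and the weight-update rule are illustrative assumptions, not the paper's actual attention-fusion or relevance-feedback formulation.

```python
# Hypothetical sketch: (1) weighted fusion of per-modality relevance
# scores, (2) weight update from click-through feedback. All names and
# the update rule are illustrative assumptions, not the authors' method.
import numpy as np

MODALITIES = ["visual", "audio", "text"]  # assumed modality set

def fused_relevance(scores: dict[str, float], weights: np.ndarray) -> float:
    """Weighted linear fusion of per-modality relevance scores."""
    s = np.array([scores[m] for m in MODALITIES])
    return float(weights @ s)

def update_weights(weights: np.ndarray,
                   clicked: list[dict[str, float]],
                   skipped: list[dict[str, float]],
                   lr: float = 0.1) -> np.ndarray:
    """Shift weight toward modalities whose scores separate clicked
    from skipped videos (a simple relevance-feedback-style rule)."""
    def mean_scores(items: list[dict[str, float]]) -> np.ndarray:
        if not items:
            return np.zeros(len(MODALITIES))
        return np.mean([[it[m] for m in MODALITIES] for it in items], axis=0)

    gradient = mean_scores(clicked) - mean_scores(skipped)
    weights = np.clip(weights + lr * gradient, 0.0, None)  # keep non-negative
    total = weights.sum()
    if total == 0.0:  # degenerate case: fall back to uniform weights
        return np.full(len(MODALITIES), 1.0 / len(MODALITIES))
    return weights / total  # renormalize to sum to 1

# Usage: start uniform, rank candidates, then adapt from one interaction.
weights = np.full(len(MODALITIES), 1.0 / len(MODALITIES))
candidates = [
    {"visual": 0.9, "audio": 0.2, "text": 0.7},  # video A
    {"visual": 0.3, "audio": 0.8, "text": 0.4},  # video B
]
ranked = sorted(candidates, key=lambda s: fused_relevance(s, weights),
                reverse=True)
weights = update_weights(weights, clicked=[candidates[0]],
                         skipped=[candidates[1]])
print(ranked[0], weights)
```

Because the user clicked the visually strong video and skipped the audio-heavy one, the update rule raises the visual weight for subsequent rankings, which is the adaptive behavior the abstract attributes to user click-through feedback.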