Personalized content adaptation using multimodal highlights of soccer video

  • Authors:
  • Shenghong Hu

  • Affiliations:
  • School of Computer Science & Technology, Huazhong University of Science & Technology, Wuhan, China and Computer School, Hubei University of Economics, Wuhan, China

  • Venue:
  • PCM '10: Proceedings of the 11th Pacific Rim Conference on Advances in Multimedia Information Processing: Part I
  • Year:
  • 2010

Abstract

Personalized video adaptation may be the most promising way to resolve the tension between unified access to the massive volume of video content on the Internet and the limited resources of client devices, but predicting the semantic utility that links semantic content to video adaptation remains an unsettled problem. The primary contributions of this paper are as follows: multimodal highlights of soccer video are extracted through affective content analysis; a unified semantic utility model is built for multi-level summarization and frame-dropping-based transcoding; and adaptation is formulated as a Multiple-choice Multi-dimensional Knapsack Problem (MMKP) that maximizes semantic utility under the constraints of the client's resources. Experimental results show that our multimodal highlight extraction achieves reasonable accuracy for most semantic events, and that the MMKP-based solution outperforms a 0/1 Knapsack Problem (0/1KP) based solution in optimizing semantic event consumption.
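
To make the MMKP formulation concrete, the sketch below shows one plausible reading of the abstract: each semantic event forms a group, each adaptation version of that event is an item with a utility value and multi-dimensional resource costs, and exactly one version is chosen per event so that total utility is maximized within the client's resource budget. This is a minimal illustrative sketch, not the paper's implementation; the event names, utility values, and cost figures are invented, and the brute-force solver stands in for whatever MMKP algorithm the authors actually use.

```python
from itertools import product

# Hypothetical MMKP instance (values invented for illustration):
# groups[g] is the list of adaptation versions for semantic event g,
# each version given as (semantic utility, (bitrate cost kbps, duration cost s)).
groups = [
    [(0.0, (0, 0)), (0.6, (150, 8)), (0.9, (400, 20))],   # e.g. a "goal" event
    [(0.0, (0, 0)), (0.4, (120, 6)), (0.7, (350, 15))],   # e.g. a "shot" event
    [(0.0, (0, 0)), (0.3, (100, 5)), (0.5, (300, 12))],   # e.g. a "foul" event
]
capacity = (700, 30)  # client resource budget per dimension


def solve_mmkp_bruteforce(groups, capacity):
    """Enumerate every one-choice-per-group combination and keep the feasible
    choice with the highest total utility. Fine for a handful of events;
    a real system would use a heuristic or branch-and-bound MMKP solver."""
    best_utility, best_choice = -1.0, None
    for choice in product(*[range(len(g)) for g in groups]):
        utility = sum(groups[g][i][0] for g, i in enumerate(choice))
        costs = [sum(groups[g][i][1][d] for g, i in enumerate(choice))
                 for d in range(len(capacity))]
        if all(c <= cap for c, cap in zip(costs, capacity)) and utility > best_utility:
            best_utility, best_choice = utility, choice
    return best_utility, best_choice


if __name__ == "__main__":
    utility, choice = solve_mmkp_bruteforce(groups, capacity)
    print(f"chosen version per event: {choice}, total utility: {utility:.2f}")
```

The contrast with a 0/1KP formulation mentioned in the abstract would be that a 0/1 knapsack either includes or excludes each highlight outright, whereas the multiple-choice structure lets the adapter pick among several quality/length versions of the same event.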