Personalized video adaptation based on video content analysis

  • Authors:
  • Min Xu, Jesse S. Jin, Suhuai Luo

  • Affiliations:
  • University of Newcastle, Callaghan, Australia (all three authors)

  • Venue:
  • Proceedings of the 9th International Workshop on Multimedia Data Mining: held in conjunction with the ACM SIGKDD 2008
  • Year:
  • 2008

Abstract

Personalized video adaptation aims to satisfy individual users' needs for video content, and multimedia data mining plays a significant role in annotating video to match users' preferences. In this paper, a comprehensive solution for personalized video adaptation is proposed based on video content mining. Video content mining targets both cognitive content and affective content. Cognitive content refers to semantic events, which are specific to particular video domains. Because users sometimes prefer to make "emotional" decisions when selecting the video content that interests them, we also introduce affective content, i.e., content that evokes strong reactions in audiences. For cognitive content mining, features are extracted from multiple modalities, and a machine learning module is then applied to obtain mid-level features such as specific audio sounds and semantic video shots. These mid-level features are used to detect cognitive content with Hidden Markov Models (HMMs). For affective content mining, affective content is detected at three levels: "low", "medium", and "high". Since affective levels may have no sharp boundaries, fuzzy c-means clustering is applied to low-level features to simulate users' perception. The adaptation is then implemented on the MPEG-21 Digital Item Adaptation framework. One of the challenges is how to quantify users' preferences for video content: Information Entropy (IE) and membership functions are calculated to decide resource-allocation priorities for cognitive content and affective content, respectively.
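
The abstract's cognitive-content step, detecting semantic events from sequences of mid-level features with Hidden Markov Models, can be illustrated with a minimal sketch. The sketch below uses the hmmlearn library; the events ("goal", "corner"), feature dimensions, and training data are hypothetical placeholders, since the abstract does not specify the paper's actual setup.

```python
import numpy as np
from hmmlearn import hmm

# One HMM per semantic event; a window of mid-level features (e.g. detector
# confidences for specific audio sounds or semantic video shots) is assigned
# to the event whose model gives it the highest log-likelihood.

def train_event_model(sequences, n_states=3):
    """Fit one Gaussian HMM on a list of mid-level feature sequences."""
    X = np.concatenate(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50, random_state=0)
    model.fit(X, lengths)
    return model

rng = np.random.default_rng(0)
# Toy training data: 10 sequences of 20 shots x 4 mid-level features per event.
goal_seqs   = [rng.normal(1.0, 0.3, size=(20, 4)) for _ in range(10)]
corner_seqs = [rng.normal(0.0, 0.3, size=(20, 4)) for _ in range(10)]
models = {"goal": train_event_model(goal_seqs),
          "corner": train_event_model(corner_seqs)}

# Detection: pick the event model under which a new window is most likely.
window = rng.normal(1.0, 0.3, size=(20, 4))
print("detected event:", max(models, key=lambda ev: models[ev].score(window)))
```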
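For the affective side, the abstract names fuzzy c-means clustering over low-level features to obtain soft "low" / "medium" / "high" levels. Below is a self-contained fuzzy c-means sketch of that idea; the two arousal-style input features (motion intensity, audio energy) are assumed for illustration and are not taken from the paper.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, tol=1e-5, seed=0):
    """Plain fuzzy c-means: returns cluster centers and a membership
    matrix U of shape (n_samples, n_clusters) whose rows sum to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        d = np.fmax(d, 1e-12)  # guard against zero distances
        # Standard FCM update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Hypothetical low-level features per shot: [motion intensity, audio energy].
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(mu, 0.1, size=(50, 2)) for mu in (0.2, 0.5, 0.8)])
centers, U = fuzzy_c_means(X)

# Name clusters "low"/"medium"/"high" by the magnitude of their centers;
# U holds the fuzzy degree to which each shot belongs to each level.
labels = np.array(["low", "medium", "high"])[np.argsort(np.argsort(centers.mean(axis=1)))]
print(labels[U.argmax(axis=1)][:5], U[0])
```

The fuzzy memberships, rather than hard labels, are what capture the "no sharp boundaries" observation: a shot can belong, say, 0.6 to "medium" and 0.4 to "high".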
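Finally, the abstract states that user preferences are quantified with Information Entropy (for cognitive content) and membership functions (for affective content) to set resource-allocation priorities, but gives no formulas. As a loose illustration only, here is a Shannon-entropy computation over an assumed preference distribution; the distribution and its interpretation are hypothetical, not the paper's method.

```python
import math

def shannon_entropy(probs):
    """H = -sum_i p_i * log2(p_i) for a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Hypothetical preference distribution over semantic events, e.g.
# estimated from a user's viewing history.
prefs = {"goal": 0.6, "corner": 0.25, "foul": 0.15}
H = shannon_entropy(prefs.values())
print(f"preference entropy: {H:.3f} bits")
# A lower entropy means a sharper preference, which could justify giving
# the top-ranked events a larger share of the adaptation bit budget.
```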