Affect-based adaptive presentation of home videos

  • Authors:
  • Xiaohong Xiang; Mohan S. Kankanhalli

  • Affiliations:
  • National University of Singapore, Singapore, Singapore; National University of Singapore, Singapore, Singapore

  • Venue:
  • MM '11: Proceedings of the 19th ACM International Conference on Multimedia
  • Year:
  • 2011

Abstract

In recent times, the proliferation of multimedia devices and the reduced cost of data storage have enabled people to easily record and collect large numbers of home videos, and these collections keep growing over time. With the popularity of participatory media such as YouTube and Facebook, people encounter problems when they want to share their home videos with others. The first problem is that different people are interested in different video content; given the size of a home video collection, manually selecting suitable content for viewers with different interests is a time-consuming and difficult task. Secondly, as short videos become increasingly popular in media-sharing applications, people must manually cut and edit their home videos, which is again tedious. In this paper, we propose a method that employs affective analysis to automatically create video presentations from home videos. Our method adaptively creates presentations based on three properties: emotional tone, local main character, and global main character. A novel sparsity-based affective labeling method is proposed to identify the emotional content of the videos, and the local and global main characters are determined by applying face recognition in each shot. To demonstrate the proposed method, three kinds of presentations are created, targeted at family members, acquaintances, and outsiders. Experimental results show that our method is effective for video sharing and that users are satisfied with the videos it generates.
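
The abstract states that local and global main characters are determined by applying face recognition in each shot, but does not spell out the aggregation rule. The following Python sketch assumes a simple frequency-based interpretation: the local main character of a shot is its most frequently recognized identity, and the global main character is the identity that is a local main character in the most shots. The function and variable names (main_characters, shot_face_ids) are illustrative, not taken from the paper.

```python
from collections import Counter

def main_characters(shot_face_ids):
    """Derive local and global main characters from per-shot face recognition output.

    shot_face_ids: list of shots, each a list of recognized identity labels
                   (one label per detected face occurrence in that shot).
    Returns (local_mains, global_main), where local_mains[i] is the most
    frequent identity in shot i (or None if no faces were detected), and
    global_main is the identity that dominates the most shots.
    """
    local_mains = []
    for faces in shot_face_ids:
        if faces:
            local_mains.append(Counter(faces).most_common(1)[0][0])
        else:
            local_mains.append(None)  # shot with no detected faces

    counted = Counter(m for m in local_mains if m is not None)
    global_main = counted.most_common(1)[0][0] if counted else None
    return local_mains, global_main


# Example with three hypothetical shots and identity labels:
shots = [["alice", "alice", "bob"], ["bob"], ["alice", "carol", "alice"]]
local_mains, global_main = main_characters(shots)
print(local_mains, global_main)  # ['alice', 'bob', 'alice'] alice
```

Such per-shot and per-video character statistics could then be combined with the affective labels to select which shots appear in the presentations generated for family members, acquaintances, and outsiders.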