In this work we propose to exploit context sensor data for analyzing user-generated videos. First, we index the recorded media at a low level with the instantaneous compass orientations of the recording device. We then build on this low-level index to derive a higher-level one: discovering camera panning movements, classifying them, and identifying the Region of Interest (ROI) of the recorded event. In this way we extract information about the content without performing content analysis, relying instead on sensor data analysis. Furthermore, we develop an automatic remixing system that exploits the obtained high-level index to produce a video remix. We show that the proposed sensor-based analysis correctly detects and classifies camera panning and identifies the ROI; in addition, we provide examples of its application to automatic video remixing.
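The compass-based indexing described above can be illustrated with a minimal sketch. The paper does not specify its detection rules, so the thresholds (`min_rate`, `min_sweep`), the bin width, and the function names below are illustrative assumptions: a pan is taken to be a sustained run of angular velocity in one direction, and the ROI is taken to be the modal compass orientation across the recording.

```python
from collections import Counter


def angle_diff(a, b):
    """Signed smallest difference b - a in degrees, in (-180, 180]."""
    return (b - a + 180.0) % 360.0 - 180.0


def detect_pans(samples, min_rate=5.0, min_sweep=20.0):
    """Detect camera pans from (timestamp_s, compass_deg) samples.

    A pan is a run of consecutive samples whose angular velocity stays
    above `min_rate` deg/s in one direction and whose total sweep
    exceeds `min_sweep` degrees (both thresholds are assumptions).
    Returns a list of (start_t, end_t, sweep_deg) tuples; the sign of
    sweep_deg classifies the pan direction (positive = clockwise).
    """
    pans = []
    start, sweep, direction = None, 0.0, 0
    for (t0, a0), (t1, a1) in zip(samples, samples[1:]):
        dt = t1 - t0
        d = angle_diff(a0, a1)
        rate = d / dt if dt > 0 else 0.0
        sgn = 1 if rate > 0 else -1
        if abs(rate) >= min_rate and direction in (0, sgn):
            if start is None:
                start, sweep, direction = t0, 0.0, sgn
            sweep += d
        else:
            if start is not None and abs(sweep) >= min_sweep:
                pans.append((start, t0, sweep))
            start, sweep, direction = None, 0.0, 0
    if start is not None and abs(sweep) >= min_sweep:
        pans.append((start, samples[-1][0], sweep))
    return pans


def region_of_interest(samples, bin_deg=10):
    """Estimate the ROI as the modal compass bin across all samples."""
    bins = Counter(int(a % 360 // bin_deg) for _, a in samples)
    b, _ = bins.most_common(1)[0]
    return (b * bin_deg, (b + 1) * bin_deg)
```

For example, a recording that holds at 90 degrees, sweeps to 180 degrees over three seconds, and then holds there would yield one clockwise pan of roughly 90 degrees, and an ROI bin around the dwell orientation. A real remixing system would aggregate such indices across all contributing devices before selecting segments.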