Exploiting self-adaptive posture-based focus estimation for lecture video editing
Proceedings of the 13th annual ACM international conference on Multimedia
This paper presents a gesture-driven approach to video editing. Given a lecture video, we adopt novel approaches to automatically detect its content and synchronize it with the corresponding electronic slides. The gestures in each synchronized topic (or shot) are then tracked and recognized continuously. By registering shots with slides and recovering the transformation between them, the regions where the gestures take place can be located. Based on the recognized gestures and their registered positions, information in the slides can be seamlessly extracted, not only to assist video editing but also to enhance the quality of the original lecture video.
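The registration step described above maps gesture locations in the video frame into slide coordinates. A common way to model such a shot-to-slide transformation (not specified in the abstract, so assumed here) is a planar homography; the sketch below shows how a tracked gesture point would be mapped once a 3x3 homography matrix has been recovered. The matrix values and function names are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch: mapping a tracked gesture point from video-frame
# coordinates into slide coordinates via a planar homography H.
# H itself would be estimated during shot/slide registration; the
# values below are purely illustrative.

def apply_homography(H, point):
    """Map an (x, y) point through the 3x3 homography H
    using homogeneous coordinates."""
    x, y = point
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)

# Illustrative transform: scale the frame by 0.5 and shift by (10, 20),
# as if the slide occupies a sub-region of the video frame.
H = [[0.5, 0.0, 10.0],
     [0.0, 0.5, 20.0],
     [0.0, 0.0,  1.0]]

# A gesture tracked at (100, 200) in the frame lands here on the slide:
slide_point = apply_homography(H, (100.0, 200.0))
print(slide_point)  # → (60.0, 120.0)
```

With the gesture point expressed in slide coordinates, the slide region it indicates (a bullet, figure, or text block) can be looked up directly, which is what allows slide content to be extracted and composited back into the edited video.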