In many scenarios a dynamic scene is filmed by multiple video cameras located at different viewing positions. Visualizing such multi-view data on a single display raises an immediate question: which cameras capture better views of the scene? Typically (e.g., in TV broadcasts) a human producer manually selects the best view. In this paper we aim to automate this process by evaluating the quality of the view captured by each camera. We regard human actions as three-dimensional shapes induced by their silhouettes in the space-time volume. The quality of a view is then evaluated based on features of the space-time shape that correspond to limb visibility. Building on these features, we propose two view-quality approaches: one is generic, while the other can be trained to fit any preferred action recognition method. Our experiments show that the proposed view selection produces intuitive results that match common conventions. We further show that it improves action recognition results.
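The space-time-shape idea above can be sketched in code. The snippet below is not the authors' implementation: it stacks per-frame binary silhouettes into a space-time volume and ranks cameras with a simple stand-in score, a scale-normalized silhouette boundary length, on the intuition that views with visible, extended limbs produce more complex silhouette boundaries than views where limbs are foreshortened. The function names (`spacetime_volume`, `view_quality`, `best_view`) are illustrative assumptions, not from the paper.

```python
import numpy as np

def spacetime_volume(silhouettes):
    """Stack per-frame binary silhouettes (H x W) into a T x H x W space-time shape."""
    return np.stack([s.astype(bool) for s in silhouettes], axis=0)

def view_quality(volume):
    """Toy view-quality score: mean scale-normalized boundary length over time.
    Extended limbs lengthen the silhouette boundary relative to its area,
    which this proxy rewards. (A stand-in for the paper's actual features.)"""
    scores = []
    for frame in volume:
        area = frame.sum()
        if area == 0:
            continue  # skip frames where the subject is not visible
        # A silhouette pixel is on the boundary if any 4-neighbour is background.
        padded = np.pad(frame, 1)
        interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                    padded[1:-1, :-2] & padded[1:-1, 2:])
        boundary = frame & ~interior
        scores.append(boundary.sum() / np.sqrt(area))  # perimeter / sqrt(area)
    return float(np.mean(scores)) if scores else 0.0

def best_view(camera_silhouettes):
    """Return the index of the camera whose space-time shape scores highest,
    along with the per-camera scores."""
    scores = [view_quality(spacetime_volume(s)) for s in camera_silhouettes]
    return int(np.argmax(scores)), scores
```

For example, a camera whose silhouettes are compact blobs (limbs hidden behind the torso) scores lower than one whose silhouettes show protruding limbs, so `best_view` prefers the latter, matching the limb-visibility intuition described in the abstract.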