The mathematics of statistical machine translation: parameter estimation. Computational Linguistics, special issue on using large corpora: II.
Free viewpoint action recognition using motion history volumes. Computer Vision and Image Understanding, special issue on modeling people: vision-based understanding of a person's shape, appearance, movement, and behaviour.
Learning to Recognize Activities from the Wrong View Point. ECCV '08: Proceedings of the 10th European Conference on Computer Vision, Part I.
Human Activity Recognition with Metric Learning. ECCV '08: Proceedings of the 10th European Conference on Computer Vision, Part I.
Cross-View Action Recognition from Temporal Self-similarities. ECCV '08: Proceedings of the 10th European Conference on Computer Vision, Part II.
An iterative image registration technique with an application to stereo vision. IJCAI '81: Proceedings of the 7th International Joint Conference on Artificial Intelligence, Volume 2.
Advances in view-invariant human motion analysis: a review. IEEE Transactions on Systems, Man, and Cybernetics, Part C: Applications and Reviews.
View-Independent Action Recognition from Temporal Self-Similarities. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Cross-view action recognition via view knowledge transfer
CVPR '11 Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition
In this paper, we propose an approach to human action recognition across different views within a knowledge transfer framework. Each frame of an action is treated as a sentence in an article. Although the appearance of the same action differs markedly across views, we believe a translation relationship exists between them. To capture this relationship, we use IBM Model 1 from statistical machine translation: translation probabilities from visual words in the source view to those in the target view are estimated from training data. An action can then be translated under the maximum a posteriori (MAP) criterion. We validated our method on the public multi-view IXMAS dataset and obtained promising results compared with state-of-the-art knowledge-transfer-based methods.
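To illustrate the translation step, below is a minimal sketch of IBM Model 1 parameter estimation via EM, applied to paired codeword sequences rather than bilingual sentences. The function name `ibm_model1`, the toy vocabularies, and the pairing of per-view codeword lists are illustrative assumptions, not the paper's actual feature pipeline.

```python
from collections import defaultdict
from itertools import product

def ibm_model1(pairs, iterations=20):
    """Estimate IBM Model 1 lexical translation probabilities t(f | e) via EM.

    pairs: list of (src_words, tgt_words); in the cross-view setting, each pair
    holds the codeword sequences of the same action seen from two views.
    Returns a dict mapping (f, e) to p(target word f | source word e).
    """
    src_vocab = {e for s, _ in pairs for e in s}
    tgt_vocab = {f for _, tg in pairs for f in tg}
    # Uniform initialization over the target vocabulary.
    t = {(f, e): 1.0 / len(tgt_vocab) for f, e in product(tgt_vocab, src_vocab)}
    for _ in range(iterations):
        count = defaultdict(float)  # expected counts c(f, e)
        total = defaultdict(float)  # marginal counts over source words e
        for src, tgt in pairs:      # E-step: soft alignments
            for f in tgt:
                z = sum(t[(f, e)] for e in src)
                for e in src:
                    delta = t[(f, e)] / z
                    count[(f, e)] += delta
                    total[e] += delta
        for (f, e) in t:            # M-step: renormalize expected counts
            if total[e]:
                t[(f, e)] = count[(f, e)] / total[e]
    return t

# Toy example: source-view word "a" consistently co-occurs with target-view
# word "x", and "b" with "y"; EM should concentrate probability accordingly.
pairs = [(["a"], ["x"]), (["b"], ["y"]), (["a", "b"], ["x", "y"])]
t = ibm_model1(pairs)
```

Given such a table, a target-view action could be scored against each source-view candidate by the product of its words' translation probabilities, and the MAP-maximizing candidate selected, which mirrors the translation-based recognition described above.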