Action recognition using invariant features under unexampled viewing conditions
Proceedings of the 21st ACM International Conference on Multimedia
We present a novel transfer-learning approach to cross-camera action recognition. Inspired by canonical correlation analysis (CCA), we first extract spatio-temporal visual words from videos captured at different views and derive a correlation subspace as a joint representation for the different bag-of-words models. Unlike prior CCA-based approaches, which simply train standard classifiers such as SVMs in the resulting subspace, we exploit the domain-transfer ability of CCA in the correlation subspace, where each dimension correlates source and target data to a different degree. We therefore propose a novel SVM with a correlation regularizer that incorporates this ability into the classifier design. Experiments on the IXMAS dataset verify the effectiveness of our method, which outperforms state-of-the-art transfer-learning approaches that do not take this domain-transfer ability into account.