Recognizing human activities from image sequences is an active area of research in computer vision. Most previous work on activity recognition focuses on a single view and ignores the issue of view invariance. In this paper, we present a view-invariant human activity recognition approach that uses both motion and shape information to represent activities. For each frame of the video, a 128-dimensional optical flow vector over the region of interest represents the motion of the human body, and a 90-dimensional eigen-shape vector represents its shape. Each activity is modeled by a set of Hidden Markov Models (HMMs), one per viewing direction, to achieve view-invariant recognition. Experiments on a database of video clips of different activities show that our method is robust.
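The pipeline described above can be sketched in a few lines. This is a toy illustration only, not the authors' implementation: the eigen-shape vector is computed here as a PCA projection of flattened silhouettes, and the per-view HMM bank is replaced by a simple per-(activity, view) Gaussian scored by summed log-likelihood. All data, dimensions beyond the stated 128/90, and model parameters are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Eigen-shape features (PCA sketch) ---
# Pretend each row is a flattened, normalized silhouette (e.g. 256 values).
silhouettes = rng.standard_normal((500, 256))
k = 90                                   # dimensionality stated in the abstract
mean = silhouettes.mean(axis=0)
_, _, vt = np.linalg.svd(silhouettes - mean, full_matrices=False)
basis = vt[:k]                           # (90, 256) eigen-shape basis

def shape_vector(frame):
    """Project a flattened silhouette onto the top-k eigen-shapes."""
    return basis @ (frame - mean)

# --- Per-view model bank (stand-in for the per-view HMMs) ---
# One toy Gaussian mean per (activity, view); the real method trains an HMM
# for each viewing direction and classifies by maximum likelihood.
models = {
    (act, view): rng.standard_normal(k)
    for act in ("walk", "wave") for view in range(4)
}

def log_likelihood(seq, mu):
    # Isotropic Gaussian log-likelihood summed over frames (toy stand-in
    # for the HMM forward-algorithm score).
    return -0.5 * np.sum((seq - mu) ** 2)

def classify(frames):
    """Return the activity whose best-scoring view model explains the clip."""
    seq = np.array([shape_vector(f) for f in frames])
    best = max(models, key=lambda key: log_likelihood(seq, models[key]))
    return best[0]

clip = rng.standard_normal((30, 256))    # a synthetic 30-frame clip
label = classify(clip)
```

Because each activity keeps one model per viewing direction and the classifier takes the maximum score across all of them, the predicted label is insensitive to which view generated the clip, which is the core of the view-invariance argument.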