Multi-camera networks offer the potential for a variety of vision-based applications by providing rich visual information. In this paper, a method of image segmentation for human gesture analysis in multi-camera networks is presented. To exploit the manifold sources of visual information provided by the network, an opportunistic fusion framework is described and incorporated into the proposed method for gesture analysis. A 3D human body model serves as the converging point of spatiotemporal and feature fusion. It maintains both the geometric parameters of the human posture and adaptively learned appearance attributes, all of which are updated along the three dimensions of the opportunistic fusion: space, time, and features. When confidence levels are sufficient, the parameters of the 3D human body model are fed back to aid subsequent vision analysis. The model also serves as an intermediate representation for gesture interpretation in different applications. The image segmentation method described in this paper is one component of the gesture analysis problem: it reduces the raw visual data from a single camera to concise descriptions for more efficient communication between cameras. The color distribution registered in the model is used to initialize segmentation; Perceptually Organized Expectation Maximization (POEM) is then applied to refine the color segments with observations from a single camera; finally, ellipse fitting parameterizes the segments. Experimental segmentation results are illustrated, and examples of skeleton fitting based on the elliptical segments demonstrate the motivation for, and capability of, the model-based segmentation approach for multi-view human gesture analysis.
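The pipeline above — model-registered colors initializing an EM refinement, followed by ellipse fitting of the resulting segments — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses plain Gaussian-mixture EM in color space as a stand-in for POEM (which additionally incorporates perceptual-organization cues), and it derives ellipse parameters from second-order moments of each segment's pixel coordinates. All function names and parameter choices (e.g. the initial covariance) are illustrative assumptions.

```python
import numpy as np

def em_color_segmentation(pixels, init_means, n_iter=20):
    """Assign each pixel (N,3 color vectors) to one of K color modes.

    init_means (K,3) would come from the color distribution registered
    in the 3D body model; here it is just an array of initial RGB means.
    Plain GMM EM, a simplified stand-in for POEM.
    """
    N, K = pixels.shape[0], init_means.shape[0]
    means = init_means.astype(float).copy()
    covs = np.stack([np.eye(3) * 400.0 for _ in range(K)])  # assumed init spread
    weights = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibility of each mode for each pixel
        resp = np.zeros((N, K))
        for k in range(K):
            diff = pixels - means[k]
            inv = np.linalg.inv(covs[k])
            maha = np.einsum('ij,jk,ik->i', diff, inv, diff)
            norm = np.sqrt((2 * np.pi) ** 3 * np.linalg.det(covs[k]))
            resp[:, k] = weights[k] * np.exp(-0.5 * maha) / norm
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mode parameters from soft assignments
        Nk = resp.sum(axis=0)
        for k in range(K):
            means[k] = resp[:, k] @ pixels / Nk[k]
            diff = pixels - means[k]
            covs[k] = (resp[:, k, None] * diff).T @ diff / Nk[k] \
                      + 1e-6 * np.eye(3)  # regularize
        weights = Nk / N
    return resp.argmax(axis=1)  # hard segment labels

def fit_ellipse(coords):
    """Parameterize one segment (M,2 pixel coordinates) as an ellipse
    via its second-order moments: center, semi-axes, orientation."""
    center = coords.mean(axis=0)
    cov = np.cov(coords.T)
    evals, evecs = np.linalg.eigh(cov)          # ascending eigenvalues
    order = np.argsort(evals)[::-1]
    a, b = 2.0 * np.sqrt(evals[order])          # semi-axes (2-sigma extent)
    major = evecs[:, order[0]]
    theta = np.arctan2(major[1], major[0])      # major-axis orientation
    return center, a, b, theta
```

Each resulting segment is thus reduced to five numbers (center, two semi-axes, orientation), which is the kind of concise description the paper communicates between cameras in place of raw pixels.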