This paper proposes a new model for gesture recognition: the view- and motion-based aspect model (VAMBAM), an omnidirectional view-based aspect model built on motion-based segmentation. The model realizes location-free and rotation-free gesture recognition with a distributed omnidirectional vision system (DOVS). The DOVS, which consists of multiple omnidirectional cameras, is a prototype of a perceptual information infrastructure for monitoring and recognizing the real world. In addition to the VAMBAM concept, the paper shows how the model enables robust, real-time visual recognition by the DOVS.
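The recognition idea described above — matching a motion-segmented observation against aspect models stored for many viewing directions, so that the result does not depend on the subject's orientation — can be sketched minimally as follows. This is an illustrative assumption of how a view-based aspect match might work, not the paper's actual algorithm; all names, the feature dimension, and the distance measure are hypothetical.

```python
import numpy as np

# Hypothetical sketch: one feature vector per (gesture, viewing direction).
# Minimizing the match distance over all stored views approximates the
# rotation-free property claimed for VAMBAM; the data here is random.
rng = np.random.default_rng(0)
N_GESTURES, N_VIEWS, FEAT_DIM = 3, 8, 16
aspect_models = rng.normal(size=(N_GESTURES, N_VIEWS, FEAT_DIM))

def recognize(observation: np.ndarray) -> int:
    """Return the index of the best-matching gesture.

    Distances are computed to every stored view of every gesture;
    taking the minimum over views makes the match independent of the
    subject's orientation relative to the camera.
    """
    dists = np.linalg.norm(aspect_models - observation, axis=-1)  # (G, V)
    per_gesture = dists.min(axis=1)  # best view for each gesture
    return int(per_gesture.argmin())

# An observation taken exactly from gesture 1, view 5 matches gesture 1
# regardless of which view (orientation) produced it.
print(recognize(aspect_models[1, 5]))  # 1
```

A multi-camera system like the DOVS could additionally minimize over cameras, letting whichever camera has the clearest aspect of the gesture dominate the match.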