Since a gesture involves dynamic and complex motion, multiview observation and recognition are desirable. To represent gestures well, one must first know from which views a gesture should be observed. Furthermore, how the recognition results are integrated becomes increasingly important as more camera views are considered. To investigate these problems, we propose a framework under which multiview recognition is carried out, and an integration scheme by which the recognition results are integrated online and in real time. For performance evaluation, we use the ViHASi (Virtual Human Action Silhouette) public image database as a benchmark, together with our Japanese sign language (JSL) image database containing 18 kinds of hand signs. By examining the recognition rate of each gesture from each view, we found gestures that exhibit view dependency and gestures that do not. We also found that the view dependency itself can vary with the target gesture set. By integrating the recognition results from different views, our swarm-based integration achieves more robust recognition performance than individual fixed-view recognition agents.
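The abstract describes integrating per-view recognition results online and in real time. The paper's actual swarm-based scheme is not detailed here, so the following is only an illustrative sketch under assumed interfaces: each camera view produces a score per gesture class, the scores are fused by a weighted sum, and the per-view weights adapt online by reinforcing views that agree with the current consensus (the function name `integrate`, the score-dictionary format, and the 1.2/0.9 reinforcement factors are all hypothetical choices, not the authors' method).

```python
def integrate(view_scores, weights):
    """Fuse per-view gesture scores and adapt view weights online.

    view_scores: list of dicts {gesture_label: score}, one dict per camera view
    weights:     list of non-negative per-view weights (same length)
    Returns (consensus_label, updated_weights).
    """
    gestures = view_scores[0].keys()
    # Weighted sum of scores across all views for each gesture class.
    combined = {g: sum(w * s[g] for w, s in zip(weights, view_scores))
                for g in gestures}
    label = max(combined, key=combined.get)
    # Reinforce views whose own top choice matches the consensus,
    # and slightly penalize views that disagree (factors are illustrative).
    new_w = [w * (1.2 if max(s, key=s.get) == label else 0.9)
             for w, s in zip(weights, view_scores)]
    total = sum(new_w)
    return label, [w / total for w in new_w]
```

Called once per frame, this lets well-placed views accumulate influence over a gesture sequence while poorly placed (view-dependent) ones fade, which is one simple way to realize the adaptive, online integration the abstract refers to.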