TV remote control using human hand motion based on optical flow system
ICCSA'12 Proceedings of the 12th international conference on Computational Science and Its Applications - Volume Part III
We present a new human motion recognition technique for a hands-free user interface. Although many motion recognition techniques for video sequences have been reported, no man-machine interface has been developed that recognizes a sufficiently wide variety of motions. The difficulty lies in the limited spatial information that can be acquired from video sequences captured by an ordinary camera. The proposed system uses a depth image in addition to a normal grayscale image from a time-of-flight camera, which measures the distance to objects, so various motions can be recognized accurately. The system's main functions are gesture recognition and posture measurement. The former is performed using a bag-of-words approach in which the trajectories of key points tracked around the human body serve as features. The main technical contribution of the proposed method is the use of 3.5D spatiotemporal trajectory features, which combine horizontal, vertical, time, and depth information. The latter is obtained through face detection and object tracking. The proposed user interface is natural and convenient because it requires no contact-type device, such as a motion-sensor controller. The effectiveness of the proposed 3.5D spatiotemporal features was confirmed in a comparative experiment against conventional 3.0D spatiotemporal features, the generality of the system was demonstrated in an experiment with multiple people, and its usefulness as a pointing device was shown in a practical simulation.
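The 3.5D trajectory idea described above can be sketched in a few lines: each tracked key point yields a sequence of (x, y, depth) positions over time, its frame-to-frame displacements form the feature vector, and a bag-of-words histogram is built by quantizing features against a codebook. This is a minimal illustrative sketch, not the paper's implementation; the function names, the displacement-based feature encoding, and the toy codebook are all assumptions.

```python
# Hedged sketch of 3.5D spatiotemporal trajectory features plus a
# bag-of-words histogram. All names and parameters are illustrative
# assumptions, not the authors' implementation.

def trajectory_feature(track):
    """track: list of (x, y, depth) key-point positions, one per frame.
    Returns concatenated frame-to-frame displacements (dx, dy, ddepth);
    the time dimension is implicit in the frame ordering."""
    feat = []
    for (x0, y0, d0), (x1, y1, d1) in zip(track, track[1:]):
        feat.extend((x1 - x0, y1 - y0, d1 - d0))
    return feat

def bag_of_words(features, codebook):
    """Quantize each trajectory feature to its nearest codeword
    (squared Euclidean distance) and return a normalized histogram."""
    hist = [0] * len(codebook)
    for f in features:
        dists = [sum((a - b) ** 2 for a, b in zip(f, w)) for w in codebook]
        hist[dists.index(min(dists))] += 1
    total = sum(hist) or 1
    return [h / total for h in hist]

# Toy usage: two short tracks and a hand-made two-word codebook.
tracks = [
    [(0, 0, 10), (1, 0, 10), (2, 0, 10)],  # rightward motion, constant depth
    [(0, 0, 10), (0, 0, 8), (0, 0, 6)],    # motion toward the camera
]
codebook = [[1, 0, 0, 1, 0, 0], [0, 0, -2, 0, 0, -2]]
features = [trajectory_feature(t) for t in tracks]
print(bag_of_words(features, codebook))  # → [0.5, 0.5]
```

In practice the codebook would be learned (e.g. by k-means clustering over training trajectories), and the depth channel is what distinguishes this 3.5D representation from conventional 3.0D (x, y, t) features: the second toy track, pure motion toward the camera, would be invisible without it.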