Perceptual user interfaces promise fluid modes of human-computer interaction that complement the mouse and keyboard, and they are especially compelling in non-desktop scenarios such as kiosks or smart rooms. Such interfaces, however, have been slow to see adoption for a variety of reasons: the computational burden they impose, a lack of robustness outside the laboratory, unreasonable calibration demands, and a shortage of sufficiently compelling applications. We address these difficulties with a fast stereo vision algorithm for recognizing hand positions and gestures. Our system uses two inexpensive video cameras to extract depth information, which improves the robustness of automatic object detection and tracking and can also be exposed directly to applications. We demonstrate the algorithm in combination with speech recognition to perform several basic window management tasks, report on a user study probing the ease of using the system, and discuss the implications of such a system for future user interfaces.
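The abstract does not spell out the stereo algorithm itself, but the underlying principle is standard: matching each pixel between the two camera views yields a horizontal disparity, and disparity converts to metric depth via the pinhole relation Z = f·B/d (focal length in pixels times baseline, divided by disparity). The sketch below is an illustrative assumption, not the paper's actual (fast) implementation — it shows brute-force block matching by sum of absolute differences and the disparity-to-depth conversion; all function and parameter names are hypothetical.

```python
import numpy as np

def disparity_sad(left, right, max_disp, win=2):
    """Brute-force block matching: for each pixel of the left image, find
    the horizontal shift d in [0, max_disp] into the right image that
    minimises the sum of absolute differences over a (2*win+1)^2 window.
    (Real systems, including the one described here, use much faster
    formulations; this is only to make the geometry concrete.)"""
    h, w = left.shape
    disp = np.zeros((h, w))
    L = np.pad(left.astype(float), win, mode="edge")
    R = np.pad(right.astype(float), win, mode="edge")
    for y in range(h):
        for x in range(w):
            patch_l = L[y:y + 2 * win + 1, x:x + 2 * win + 1]
            best_d, best_cost = 0, np.inf
            # A left-image feature at column x appears at x - d on the right.
            for d in range(min(max_disp, x) + 1):
                patch_r = R[y:y + 2 * win + 1, x - d:x - d + 2 * win + 1]
                cost = np.abs(patch_l - patch_r).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp, focal_px, baseline_m):
    """Convert a disparity map (pixels) to depth (metres): Z = f * B / d.
    Zero disparity (no match / point at infinity) maps to inf."""
    depth = np.full(disp.shape, np.inf)
    valid = disp > 0
    depth[valid] = focal_px * baseline_m / disp[valid]
    return depth
```

For example, a scene feature seen 2 px apart by cameras with a 0.1 m baseline and 500 px focal length lies at 500 × 0.1 / 2 = 25 m; nearer hands produce larger disparities, which is what makes depth a useful cue for separating the user's hands from the background during tracking.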