Visual interpretation of hand gestures as a practical interface modality
Computer vision and other direct sensing technologies have progressed to the point where we can detect many aspects of a user's activity reliably and in real time. Simply recognizing the activity is not enough, however. If perceptual interaction is going to become part of the user interface, we must turn our attention to the tasks we wish to perform and to methods for performing them effectively.

This paper attempts to further our understanding of vision-based interaction by looking at the steps involved in building practical systems, giving examples from several existing systems. We classify the types of tasks well suited to this kind of interaction as pointing, control, or selection, and discuss interaction techniques for each class. We address the factors affecting the choice of control action and the various types of control signals that can be extracted from visual input. We present our design for widgets that perform different types of tasks, along with techniques, similar to those used with established user interface devices, that give the user the kind of control needed to perform the task well. Finally, we look at ways to combine individual widgets into Visual Interfaces that allow the user to perform these tasks both concurrently and sequentially.
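To make the task classification and widget idea above more concrete, here is a minimal sketch of how pointing, selection, and control tasks might map onto vision-based widgets driven by tracked hand positions. This is not the authors' implementation; all class names, thresholds, and parameters (e.g. `HandObservation`, the dwell count, the confidence cutoff) are hypothetical illustrations of the general approach described in the abstract.

```python
# Hypothetical sketch of vision-based widgets for pointing, selection, and
# control tasks. Names and thresholds are illustrative, not from the paper.
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional, Tuple


class TaskType(Enum):
    POINTING = auto()   # continuous 2D position, e.g. cursor steering
    SELECTION = auto()  # discrete trigger, e.g. dwelling over a button
    CONTROL = auto()    # continuous parameter, e.g. a slider value


@dataclass
class HandObservation:
    """One frame of visual input: a tracked hand position and a confidence."""
    position: Tuple[float, float]  # normalized image coordinates in [0, 1]
    confidence: float


class VisualWidget:
    """A screen region that turns tracked hand motion into a control signal."""

    def __init__(self, task: TaskType,
                 region: Tuple[float, float, float, float]):
        self.task = task
        self.region = region  # (x, y, width, height), normalized coordinates
        self.dwell_frames = 0

    def contains(self, pos: Tuple[float, float]) -> bool:
        x, y, w, h = self.region
        return x <= pos[0] <= x + w and y <= pos[1] <= y + h

    def update(self, obs: HandObservation) -> Optional[float]:
        """Return a control value when the observation activates this widget."""
        if obs.confidence < 0.5 or not self.contains(obs.position):
            self.dwell_frames = 0
            return None
        if self.task is TaskType.SELECTION:
            # Dwell-based triggering: require several consecutive frames inside
            # the region before reporting a selection event.
            self.dwell_frames += 1
            return 1.0 if self.dwell_frames >= 15 else None
        if self.task is TaskType.CONTROL:
            # Map horizontal position within the region to a value in [0, 1].
            x, _, w, _ = self.region
            return (obs.position[0] - x) / w
        # POINTING: pass the tracked x coordinate through unchanged.
        return obs.position[0]
```

A Visual Interface of the kind the abstract describes could then hold several such widgets and feed every frame's observation to each of them, so that concurrent tasks (say, a pointer plus a slider) coexist, while sequential behavior is obtained by enabling and disabling widgets as the interaction state changes.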