Virtual manufacturing environments require complex and accurate 3D human-computer interaction. A major problem with current virtual environments (VEs) is the heavy cognitive and motor load they place on users. This paper investigates multimodal intent delivery and intent inference in virtual environments. An eye-gaze modality is added to a virtual assembly system, and typical intents expressed through the combined dual-hand and eye-gaze modalities are designed. The reliability and accuracy of the eye-gaze modality are examined through experiments. The experiments show that multimodal cooperation between eye gaze and the hands has great potential to enhance the naturalness and efficiency of human-computer interaction (HCI).
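The gaze-plus-hand cooperation described above could be sketched as a simple fusion rule: a dwell-based gaze fixation picks the target, and a concurrent hand gesture determines the operation. This is a minimal illustrative sketch only; the function names, gesture labels, and thresholds below are assumptions, not the paper's actual design.

```python
from dataclasses import dataclass

DWELL_MS = 300     # assumed minimum fixation duration (illustrative)
DISPERSION = 0.05  # assumed max gaze spread, normalized screen units

@dataclass
class GazeSample:
    t_ms: int
    x: float
    y: float

def fixation_target(samples, dwell_ms=DWELL_MS, dispersion=DISPERSION):
    """Return the centroid of a fixation, or None.

    A fixation is a run of samples spanning at least `dwell_ms`
    whose combined x/y spread stays within `dispersion`
    (a dispersion-threshold, I-DT-style criterion).
    """
    if not samples or samples[-1].t_ms - samples[0].t_ms < dwell_ms:
        return None
    xs = [s.x for s in samples]
    ys = [s.y for s in samples]
    if (max(xs) - min(xs)) + (max(ys) - min(ys)) > dispersion:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def infer_intent(gaze_samples, hand_gesture):
    """Fuse the gaze-selected point with a hand gesture into an intent."""
    target = fixation_target(gaze_samples)
    if target is None:
        return None  # no stable fixation: gaze has not selected a target
    if hand_gesture == "grasp":
        return ("pick", target)
    if hand_gesture == "release":
        return ("place", target)
    return None

# 20 gaze samples over 380 ms, tightly clustered near (0.5, 0.4)
samples = [GazeSample(t, 0.50, 0.40) for t in range(0, 400, 20)]
print(infer_intent(samples, "grasp"))
```

The point of the sketch is the division of labor the abstract implies: gaze is fast but noisy, so it supplies only the *where* (the fixated object), while the hand gesture supplies the *what* (the manipulation), relieving each channel of the other's load.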