A novel non-intrusive eye gaze estimation using cross-ratio under large head motion
Computer Vision and Image Understanding - Special issue on eye detection and tracking
Voice recognition technology for visual artists with disabilities in their upper limbs
OZCHI '05 Proceedings of the 17th Australia Conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future
3D display based on motion parallax using non-contact 3D measurement of head position
OZCHI '05 Proceedings of the 17th Australia Conference on Computer-Human Interaction: Citizens Online: Considerations for Today and the Future
The drive to create: an investigation of tools to support disabled artists
Proceedings of the 6th ACM SIGCHI Conference on Creativity & Cognition
Real-time eye-gaze estimation by using a virtual reference point
ISCGAV'05 Proceedings of the 5th WSEAS International Conference on Signal Processing, Computational Geometry & Artificial Vision
In this paper, a new kind of human-computer interface is proposed that combines three-dimensional (3-D) visualization of multimedia objects with eye-controlled interaction. To explore the advantages and limitations of the concept, a prototype system has been set up. The testbed includes a visual operating system for integrating novel forms of interaction with a 3-D graphical user interface, autostereoscopic (free-viewing) 3-D displays closely adapted to the mechanisms of binocular vision, and solutions for nonintrusive eye-controlled interaction (video-based head and gaze tracking). The paper reviews the system's key components and outlines various applications implemented for user testing. Preliminary results show that most users are impressed by the 3-D graphical user interface and by the possibility of communicating with a computer simply by looking at the object of interest. On the other hand, the results emphasize the need for a more intelligent interface agent to avoid misinterpreting the user's eye-controlled input and to cancel undesired actions.
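The gaze-estimation work listed above relies on the projective cross-ratio, an invariant of four collinear points that is preserved under any projective transformation (such as the mapping between screen-corner light reflections on the cornea and their positions in the camera image). The sketch below is only an illustration of that invariant, not the paper's implementation; the helper names and the sample transformation parameters are hypothetical.

```python
# Illustration of the projective cross-ratio invariant used by
# cross-ratio-based gaze estimation (hypothetical helper names).

def cross_ratio(a, b, c, d):
    """Cross-ratio (a, b; c, d) of four collinear points, given as
    scalar coordinates along their common line."""
    return ((c - a) * (d - b)) / ((c - b) * (d - a))

def projective_map(x, p, q, r, s):
    """A 1-D projective (Moebius) transformation x -> (p*x + q)/(r*x + s),
    standing in for the camera's view of points on the cornea."""
    return (p * x + q) / (r * x + s)

# Cross-ratio of four points before the transformation ...
pts = [0.0, 1.0, 3.0, 7.0]
cr_before = cross_ratio(*pts)

# ... and after mapping every point projectively (arbitrary example
# parameters): the cross-ratio is unchanged, which is why the gaze
# point can be related to the screen without knowing the head pose.
mapped = [projective_map(x, 2.0, 1.0, 0.5, 3.0) for x in pts]
cr_after = cross_ratio(*mapped)

print(cr_before, cr_after)  # both equal 9/7 up to floating-point error
```

Because the invariant holds for any projective transformation, the same cross-ratio computed from image measurements and from known screen geometry can be equated to recover the point of regard, which is what makes the method tolerant of large head motion.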