Shared displays in our environment often show content we want, and we typically acquire that content for a specific purpose, e.g., picking up a phone number in order to place a call. We present Eye Drop, a content transfer concept whose techniques enable fluid acquisition of content from shared displays and its local positioning on personal devices, using gaze combined with manual input. Because the eyes naturally fixate on content of interest, our techniques use gaze for remote pointing, removing the need for explicit pointing by the user; a manual trigger on a personal device confirms the selection. Transfer is then performed with gaze or manual input to smoothly transition the content to a specific location on the personal device. This work demonstrates how such techniques can acquire content and apply actions to it through a natural sequence of interaction. We present a proof-of-concept prototype through five implemented application scenarios.
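The core interaction loop described above — gaze points at a remote item, a manual trigger confirms the selection — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the `GazeSample` type, the `hit_test` helper, its `radius` threshold, and the paired gaze/trigger streams are all hypothetical simplifications of a real eye-tracker pipeline.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    """A single gaze point in shared-display coordinates (hypothetical)."""
    x: float
    y: float

def hit_test(sample, targets, radius=40.0):
    """Return the content item whose center lies within `radius` pixels
    of the gaze point, or None. Targets are dicts with x, y, and an id."""
    for t in targets:
        if (sample.x - t["x"]) ** 2 + (sample.y - t["y"]) ** 2 <= radius ** 2:
            return t
    return None

def eye_drop_select(gaze_stream, trigger_stream, targets):
    """Gaze points remotely; the manual trigger confirms the selection.
    Returns the item being looked at when the trigger first fires."""
    for sample, pressed in zip(gaze_stream, trigger_stream):
        if pressed:
            return hit_test(sample, targets)
    return None
```

In a real system the gaze stream would come from a head-mounted or remote eye tracker and the trigger from a touch or button event on the personal device; the transfer step would then animate the selected item to the gaze- or touch-designated drop location.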