We introduce ViewPointer, a wearable eye contact sensor that detects deixis towards ubiquitous computers embedded in real-world objects. ViewPointer consists of a small wearable camera no more obtrusive than a common Bluetooth headset. ViewPointer allows any real-world object to be augmented with eye contact sensing capabilities, simply by embedding a small infrared (IR) tag. The headset camera detects when a user is looking at an infrared tag by determining whether the reflection of the tag on the cornea of the user's eye appears sufficiently central to the pupil. ViewPointer not only allows any object to become an eye contact sensing appliance, it also allows identification of users and transmission of data to the user through the object. We present a novel encoding scheme used to uniquely identify ViewPointer tags, as well as a method for transmitting URLs over tags. We present a number of application scenarios as well as an analysis of design principles. We conclude that eye contact sensing input is best utilized to provide context to action.
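The detection criterion described above, that the tag's corneal reflection must appear "sufficiently central to the pupil", can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, coordinate representation, and the `threshold` parameter (a fraction of the pupil radius) are all assumptions for illustration, since the abstract does not specify how "sufficiently central" is quantified.

```python
import math

def is_eye_contact(pupil_center, glint_center, pupil_radius, threshold=0.5):
    """Hypothetical eye contact test: report True when the IR tag's
    corneal reflection (glint) lies close enough to the pupil center.

    pupil_center, glint_center: (x, y) image coordinates in pixels.
    pupil_radius: pupil radius in pixels.
    threshold: assumed cutoff, as a fraction of the pupil radius;
    the paper only states the reflection must be sufficiently central.
    """
    dx = glint_center[0] - pupil_center[0]
    dy = glint_center[1] - pupil_center[1]
    # Eye contact if the glint falls within the threshold disc
    # around the pupil center.
    return math.hypot(dx, dy) <= threshold * pupil_radius

# Glint 2.2 px from a pupil center with a 12 px radius: eye contact.
print(is_eye_contact((100, 80), (102, 81), 12))   # True
# Glint 20 px off-center: the user is looking elsewhere.
print(is_eye_contact((100, 80), (120, 80), 12))   # False
```

A distance ratio rather than an absolute pixel distance keeps the test invariant to how large the eye appears in the headset camera's frame, which seems consistent with a camera worn at a roughly fixed but not exactly calibrated position.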