3D point of gaze estimation using head-mounted RGB-D cameras
Proceedings of the 14th international ACM SIGACCESS conference on Computers and accessibility
A novel approach to 3D gaze estimation for wearable multi-camera devices is proposed, and its effectiveness is demonstrated both theoretically and empirically. The proposed approach, firmly grounded in the geometry of multiple views, introduces a calibration procedure that is efficient, accurate, and innovative, yet practical and easy to perform; it can therefore run online with little intervention from the user. The overall gaze estimation model is general, as no complex model of the human eye is assumed. This is made possible by a novel approach that can be sketched as follows: each eye is imaged by a camera; two conics are fitted to the imaged pupils; and a calibration sequence, consisting of the subject gazing at a known 3D point while moving his/her head, provides the information needed to 1) estimate the optical axis in the 3D world, 2) compute the geometry of the multi-camera system, and 3) estimate the Point of Regard in the 3D world. The resulting model has been used effectively to study visual attention through gaze estimation experiments involving people performing natural tasks in wide-field, unstructured scenarios.
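Two of the steps sketched in the abstract lend themselves to a compact illustration: fitting a conic to detected pupil boundary points, and estimating the Point of Regard as the (quasi-)intersection of the two optical-axis rays. The following is a minimal sketch of these two operations, not the paper's actual method; it assumes pupil boundary points and per-eye ray origins/directions are already available from the calibrated multi-camera rig.

```python
import numpy as np

def fit_conic(points):
    """Fit a general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0
    to 2D pupil-boundary points, via the SVD null space of the design
    matrix (coefficients are recovered up to an overall scale)."""
    x, y = points[:, 0], points[:, 1]
    D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]  # (a, b, c, d, e, f), up to scale

def point_of_regard(o1, d1, o2, d2):
    """Estimate a 3D Point of Regard as the midpoint of the shortest
    segment between two (generally skew) optical-axis rays, each given
    by an origin o and a direction d."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Normal equations for t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

Taking the midpoint of the common perpendicular makes the triangulation robust to small calibration noise, since the two estimated optical axes rarely intersect exactly in 3D.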