The goal of this work is to use off-the-shelf components for gaze-based interaction, with a focus on eye typing. Avoiding dedicated hardware such as IR light emitters makes eye tracking significantly more difficult and requires robust methods capable of handling large changes in image quality. We employ an active-contour method to obtain robust iris tracking. The main strength of the method is that the contour model avoids explicit feature detection: the contour is simply assumed to separate image observations that are statistically independent on its two sides. The contour model is used in an approach combining particle filtering with the EM algorithm. The method is robust to lighting changes and camera defocus. To determine where the user is looking, calibration is usually needed. The number of calibration points used by different methods varies from a few to several thousand, depending on the prior knowledge about the setup and equipment. We examine basic properties of gaze determination when the geometry of the camera, screen, and user is unknown. In particular, we present a lower bound on the number of calibration points needed for gaze determination on planar objects, and we examine degenerate configurations. Based on this lower bound we apply a simple calibration procedure to facilitate button selection for fast on-screen typing.
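The particle-filtering component can be illustrated with a minimal sketch. This is not the paper's EM contour algorithm: it is a generic bootstrap particle filter for a low-dimensional state (e.g. the iris centre), with a random-walk motion model and an arbitrary per-particle observation log-likelihood standing in for the contour likelihood. The function name, the state layout, and the `motion_std` parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, observe_loglik, motion_std=2.0):
    """One predict-reweight-resample step of a bootstrap particle filter.

    particles:       (N, d) array of state hypotheses, e.g. iris centre (x, y).
    observe_loglik:  callable scoring each hypothesis against the current image
                     (here a stand-in for a contour likelihood).
    """
    # Predict: diffuse particles under a random-walk motion model.
    particles = particles + rng.normal(0.0, motion_std, particles.shape)
    # Update: reweight by the observation likelihood (log domain for stability).
    logw = observe_loglik(particles)
    logw -= logw.max()
    weights = np.exp(logw)
    weights /= weights.sum()
    # Resample: draw N particles with probability proportional to weight,
    # which resets the weights to uniform.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]
```

In the full method, `observe_loglik` would evaluate the contour model around each hypothesized iris position; here any log-likelihood over the state works, e.g. a Gaussian centred on a simulated iris location.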
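The calibration side can be made concrete under one common assumption: if the mapping from a tracked eye feature to the screen plane is modelled as a planar homography, then its 8 degrees of freedom (a 3x3 matrix up to scale) and the 2 constraints per correspondence give a minimum of 4 calibration points, degenerate configurations (e.g. 3 collinear points) aside. The sketch below is not the paper's calibration procedure, just a standard Direct Linear Transform (DLT) fit under that assumption; the function names and point lists are illustrative.

```python
import numpy as np

def fit_homography(eye_pts, screen_pts):
    """Estimate the 3x3 homography H mapping eye-feature points to screen
    points via the Direct Linear Transform. Each correspondence contributes
    two rows to A; H has 8 DOF, so at least 4 points are required."""
    assert len(eye_pts) == len(screen_pts) >= 4
    A = []
    for (x, y), (u, v) in zip(eye_pts, screen_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of A with the smallest
    # singular value (the null-space direction of the constraint system).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_gaze(H, eye_pt):
    """Project an eye-feature point onto the screen plane."""
    p = H @ np.array([eye_pt[0], eye_pt[1], 1.0])
    return p[:2] / p[2]
```

With exactly 4 non-degenerate correspondences the constraint matrix has a one-dimensional null space and the homography is determined exactly; additional points turn the SVD step into a least-squares fit, which is useful when the tracked feature positions are noisy.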