Behavioural constraints on animate vision
Image and Vision Computing - 4th Alvey Vision Meeting
Autonomous Robots
Perceiving and recognizing three-dimensional forms
Alternative essences of intelligence
AAAI '98/IAAI '98 Proceedings of the fifteenth national/tenth conference on Artificial intelligence/Innovative applications of artificial intelligence
Tracking and Learning Graphs and Pose on Image Sequences of Faces
FG '96 Proceedings of the 2nd International Conference on Automatic Face and Gesture Recognition (FG '96)
Non-Intrusive Gaze Tracking Using Artificial Neural Networks
A Binocular, Foveated Active Vision System
Example Based Learning for View-Based Human Face Detection
Real-Time Gaze Holding in Binocular Robot Vision
Journal of Cognitive Neuroscience
Robust real-time face tracking and gesture recognition
IJCAI'97 Proceedings of the Fifteenth international joint conference on Artificial intelligence - Volume 2
IJCAI'91 Proceedings of the 12th international joint conference on Artificial intelligence - Volume 1
Detecting Faces in Images: A Survey
IEEE Transactions on Pattern Analysis and Machine Intelligence
Challenges in building robots that imitate people
Imitation in animals and artifacts
MEXI: machine with emotionally eXtended intelligence
Design and application of hybrid intelligent systems
Robust real-time face tracker for cluttered environments
Computer Vision and Image Understanding
Pre-Attentive and Attentive Detection of Humans in Wide-Field Scenes
International Journal of Computer Vision
Welfare interface implementation using multiple facial features tracking for the disabled people
Pattern Recognition Letters
Eye tracking based interaction with 3d reconstructed objects
MM '08 Proceedings of the 16th ACM international conference on Multimedia
Real-time emotion recognition using biologically inspired models
AVBPA'03 Proceedings of the 4th international conference on Audio- and video-based biometric person authentication
"shooting a bird": game system using facial feature for the handicapped people
HCI'07 Proceedings of the 12th international conference on Human-computer interaction: intelligent multimodal interaction environments
The evolution of communication systems by adaptive agents
Adaptive agents and multi-agent systems
Family facial patch resemblance extraction
ACCV'10 Proceedings of the 10th Asian conference on Computer vision - Volume Part II
Computer interface to use eye and mouse movement
MMM'07 Proceedings of the 13th International conference on Multimedia Modeling - Volume Part II
Eye tracking using neural network and mean-shift
ICCSA'06 Proceedings of the 2006 international conference on Computational Science and Its Applications - Volume Part III
Welfare interface using multiple facial features tracking
AI'06 Proceedings of the 19th Australian joint conference on Artificial Intelligence: advances in Artificial Intelligence
Detection of human faces in a compressed domain for video stratification
The Visual Computer: International Journal of Computer Graphics
Eye finding is the first step toward building a machine that can recognize social cues, such as eye contact and gaze direction, in a natural context. In this paper, we present a real-time implementation of an eye-finding algorithm for a foveated active vision system. The system uses a motion-based prefilter to identify potential face locations, which are then analyzed with a template-based face-detection algorithm developed by Sinha (1996). Detected faces are tracked in real time, and the active vision system saccades to the face using a learned sensorimotor mapping. Once gaze has been centered on the face, a high-resolution image of the eye can be captured from the foveal camera using a self-calibrated peripheral-to-foveal mapping. We also present a performance analysis of Sinha's ratio template algorithm on a standard set of static face images. Although the algorithm performs relatively poorly on static images, this result is a poor indicator of the real-time performance of the behaving system: our system finds eyes in 94% of behavioral trials. We suggest that alternate means of evaluating behavioral systems are necessary.
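The ratio template algorithm mentioned in the abstract classifies a candidate window by comparing relative brightnesses between facial regions (for example, the eye sockets are darker than the forehead) rather than absolute intensities. A minimal sketch of this idea is below; the region names, coordinates, ratio pairs, and match threshold are illustrative assumptions, not Sinha's actual template.

```python
import numpy as np

# Hypothetical region layout: (row_slice, col_slice) patches within a
# normalized face window. Coordinates are illustrative only.
REGIONS = {
    "forehead": (slice(0, 4), slice(2, 10)),
    "left_eye": (slice(4, 7), slice(2, 5)),
    "right_eye": (slice(4, 7), slice(7, 10)),
    "nose": (slice(7, 10), slice(5, 7)),
}

# Expected brightness orderings, each as a (brighter, darker) region pair.
RATIOS = [
    ("forehead", "left_eye"),
    ("forehead", "right_eye"),
    ("nose", "left_eye"),
    ("nose", "right_eye"),
]

def is_face_candidate(window: np.ndarray, min_matches: int = 4) -> bool:
    """Return True if enough pairwise brightness ratios hold in the window."""
    means = {name: float(window[r, c].mean())
             for name, (r, c) in REGIONS.items()}
    matches = sum(means[bright] > means[dark] for bright, dark in RATIOS)
    return matches >= min_matches
```

Because only orderings between region averages are tested, the check is insensitive to overall illumination level, which is why such a coarse template can serve as a cheap face filter ahead of tracking.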