Visual cognition depends critically on the moment-to-moment orientation of gaze. Gaze is changed by saccades, rapid eye movements that orient the fovea over targets of interest in a visual scene. Saccades are ballistic; a prespecified target location is computed prior to the movement, and visual feedback during the movement is precluded. Once a target is fixated, gaze is typically held for about 300 milliseconds, although fixations can be longer or shorter. Despite these distinctive properties, there has been no specific computational model of the gaze targeting strategy employed by the human visual system during visual cognitive tasks. This paper proposes such a model that uses iconic scene representations derived from oriented spatiochromatic filters at multiple scales. Visual search for a target object proceeds in a coarse-to-fine fashion, with the target's largest-scale filter responses being compared first. Task-relevant target locations are represented as saliency maps, which are used to program eye movements. Once fixated, targets are remembered using spatial memory in the form of object-centered maps. The model was empirically tested by comparing its performance with eye movement data recorded from human subjects in natural visual search tasks. Experimental results indicate excellent agreement between eye movements predicted by the model and those recorded from human subjects.
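The abstract does not spell out the search procedure, but the coarse-to-fine matching idea it describes can be illustrated with a minimal sketch. The code below is an assumption-laden stand-in, not the paper's method: it uses a crude 2x2 block-averaging pyramid in place of oriented spatiochromatic filter responses, and sum-of-squared-differences (SSD) in place of the paper's filter-response comparison. The search matches the target exhaustively at the coarsest scale, then refines the candidate location within a small neighborhood at each finer scale, as the coarse-to-fine description suggests.

```python
import numpy as np

def downsample(img):
    """Halve resolution by averaging 2x2 blocks (a crude coarser scale)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def ssd(scene, templ, r, c):
    """Sum of squared differences between templ and the scene patch at (r, c)."""
    patch = scene[r:r + templ.shape[0], c:c + templ.shape[1]]
    return np.sum((patch - templ) ** 2)

def coarse_to_fine_search(scene, target, levels=3):
    """Locate `target` in `scene`, matching the coarsest scale first and
    refining the best candidate location at each finer scale."""
    s_pyr, t_pyr = [scene], [target]
    for _ in range(levels - 1):
        s_pyr.append(downsample(s_pyr[-1]))
        t_pyr.append(downsample(t_pyr[-1]))
    # Exhaustive match at the coarsest level (largest-scale responses first).
    s, t = s_pyr[-1], t_pyr[-1]
    positions = [(r, c)
                 for r in range(s.shape[0] - t.shape[0] + 1)
                 for c in range(s.shape[1] - t.shape[1] + 1)]
    best = min(positions, key=lambda rc: ssd(s, t, *rc))
    # Refine within a small neighborhood at each successively finer level.
    for lvl in range(levels - 2, -1, -1):
        s, t = s_pyr[lvl], t_pyr[lvl]
        r0, c0 = best[0] * 2, best[1] * 2
        cand = [(r, c)
                for r in range(max(r0 - 2, 0),
                               min(r0 + 3, s.shape[0] - t.shape[0] + 1))
                for c in range(max(c0 - 2, 0),
                               min(c0 + 3, s.shape[1] - t.shape[1] + 1))]
        best = min(cand, key=lambda rc: ssd(s, t, *rc))
    return best

# Hypothetical usage: embed a distinctive 8x8 target in a 32x32 scene.
scene = np.zeros((32, 32))
target = np.arange(64, dtype=float).reshape(8, 8) + 1.0
scene[12:20, 20:28] = target
print(coarse_to_fine_search(scene, target))  # -> (12, 20)
```

The negated SSD values over all candidate positions can be read as a task-relevant saliency map of the kind the abstract mentions: the maximum of that map is the location the model would select as the next saccade target.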