Vision in natural and virtual environments

  • Authors and affiliations:
  • Mary M. Hayhoe (University of Rochester, Rochester, NY)
  • Dana H. Ballard (University of Rochester, Rochester, NY)
  • Jochen Triesch (University of California, La Jolla, CA)
  • Hiroyuki Shinoda (Ritsumeikan University, Kusatsu Shiga, 525-8577, Japan)
  • Pilar Aivar (University of Oviedo, Oviedo, Asturias 33003, Spain)
  • Brian Sullivan (University of Rochester, Rochester, NY)

  • Venue:
  • ETRA '02 Proceedings of the 2002 symposium on Eye tracking research & applications
  • Year:
  • 2002


Abstract

Our knowledge of the way the visual system operates in everyday behavior has, until recently, been very limited. This information is critical not only for understanding visual function, but also for understanding the consequences of various kinds of visual impairment, and for the development of interfaces between human and artificial systems. The development of eye trackers that can be mounted on the head now allows monitoring of gaze without restricting the observer's movements. Observations of natural behavior have demonstrated the highly task-specific and directed nature of fixation patterns, and reveal considerable regularity between observers. Eye, head, and hand coordination also reveals much greater flexibility and task-specificity than previously supposed. Experimental examination of the issues raised by observations of natural behavior requires the development of complex virtual environments that the experimenter can manipulate at critical points during task performance. Experiments in which we monitored gaze in a simulated driving environment demonstrate that the visibility of task-relevant information depends critically on active search initiated by the observer according to an internally generated schedule, and this schedule depends on learnt regularities in the environment. In another virtual environment, where observers copied toy models, we showed that observers use regularities in the spatial structure to control eye movement targeting. Other experiments in a virtual environment with haptic feedback show that even simple visual properties such as size are not continuously available or processed automatically by the visual system, but are dynamically acquired and discarded according to momentary task demands.