BatGaze: a new tool to measure depth features at the center of gaze during free viewing

  • Authors:
  • Redwan Abdo A. Mohammed; Samah Abdulfatah Mohammed; Lars Schwabe

  • Affiliations:
  • Dept. of Computer Science and Electrical Engineering, Adaptive and Regenerative Software Systems, Universität Rostock, Rostock, Germany (all authors)

  • Venue:
  • BI'12 Proceedings of the 2012 international conference on Brain Informatics
  • Year:
  • 2012


Abstract

The human visual system still outperforms any artificial vision system in terms of generalization. One approach to building human-level artificial vision systems is to formulate learning principles and then let the artificial system self-organize under natural stimulation. More specifically, learning probabilistic generative models of visual signals is considered a promising approach. The latent variables in such models may then eventually turn out to reflect meaningful aspects of scene descriptions, such as features of depth images, surface properties, etc. Using luminance and depth information, where depth is only indirectly available to the human visual system, is a promising approach to learning artificial vision systems. So far, suitable stimulus material was available only from a few studies employing range imaging of static scenes, and the luminance and depth features at the center of gaze during free viewing are simply not known. We combined mobile eye tracking and depth imaging in order to record such missing stimulus material. Here we report on this newly developed system, called BatGaze, and on a first experimental validation of it. We believe that it will become a very valuable tool for mapping the visual environment of freely viewing humans.
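The core measurement the abstract describes, reading out depth features at the gaze point from a depth image registered to the eye tracker's scene camera, can be sketched as follows. This is an illustrative helper only, not code from the paper; the function name, patch size, and the assumption of a pre-registered depth map are all hypothetical:

```python
import numpy as np

def gaze_depth_patch(depth_map, gaze_xy, patch_size=9):
    """Extract a square depth patch centered on the gaze point.

    depth_map: 2-D array of per-pixel depth values (e.g. meters),
               assumed already registered to the eye tracker's
               scene-camera coordinates (a hypothetical setup).
    gaze_xy:   (x, y) gaze position in pixel coordinates.
    patch_size: odd side length of the extracted patch.
    """
    half = patch_size // 2
    x, y = (int(round(c)) for c in gaze_xy)
    h, w = depth_map.shape
    # Clamp the center so the patch stays fully inside the image.
    x = min(max(x, half), w - half - 1)
    y = min(max(y, half), h - half - 1)
    return depth_map[y - half:y + half + 1, x - half:x + half + 1]

# Example: a synthetic horizontal depth ramp and a gaze point near its center.
depth = np.tile(np.linspace(0.5, 5.0, 64), (48, 1))  # 48x64 depth map
patch = gaze_depth_patch(depth, (32, 24), patch_size=9)
print(patch.shape)  # (9, 9)
```

Statistics of such patches (mean depth, depth gradients, disparity-like features) would then serve as the "depth features at the center of gaze" that the learning experiments require.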