Incorporating visual field characteristics into a saliency map

  • Authors:
  • Hideyuki Kubota; Yusuke Sugano; Takahiro Okabe; Yoichi Sato; Akihiro Sugimoto; Kazuo Hiraki

  • Affiliations:
  • The University of Tokyo (Kubota, Sugano, Okabe, Sato, Hiraki); National Institute of Informatics (Sugimoto)

  • Venue:
  • Proceedings of the Symposium on Eye Tracking Research and Applications
  • Year:
  • 2012

Abstract

The characteristics of the human visual field are well known to differ between the central (foveal) and peripheral areas. Existing computational models of visual saliency, however, do not take this biological evidence into account: they compute visual saliency uniformly over the retina and thus have difficulty accurately predicting the next gaze (fixation) point. This paper proposes incorporating human visual field characteristics into visual saliency and presents a computational model for producing such a saliency map. Our model integrates image features obtained by bottom-up computation, with integration weights that depend on the distance from the current gaze point; the weights are learned optimally from actual saccade data. Experimental results on a large set of fixation/saccade data with wide viewing angles demonstrate the advantage of our saliency map, showing that it accurately predicts the point where one looks next.
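
To make the integration scheme in the abstract concrete, the Python sketch below combines bottom-up feature maps linearly, with weights that vary with each pixel's distance (eccentricity) from the current gaze point. It is a minimal illustration of one plausible reading, not the paper's implementation: the ring-based discretization of eccentricity, the `gaze_dependent_saliency` name, and the weight values are hypothetical stand-ins, whereas the paper learns its distance-dependent weights from actual saccade data.

```python
import numpy as np

def gaze_dependent_saliency(feature_maps, gaze_xy, ring_weights, ring_edges):
    """Combine bottom-up feature maps into a saliency map whose
    integration weights depend on distance from the current gaze point.

    feature_maps : (K, H, W) array of bottom-up feature maps
                   (e.g. intensity, color, orientation contrast).
    gaze_xy      : (x, y) current fixation in pixel coordinates.
    ring_weights : (R, K) array; row r holds the K feature weights for
                   eccentricity ring r (made-up values here; the paper
                   learns such weights from saccade data).
    ring_edges   : (R+1,) increasing distances (pixels) bounding the rings.
    """
    K, H, W = feature_maps.shape
    ys, xs = np.mgrid[0:H, 0:W]
    dist = np.hypot(xs - gaze_xy[0], ys - gaze_xy[1])

    # Assign every pixel to an eccentricity ring around the gaze point.
    ring = np.clip(np.digitize(dist, ring_edges) - 1, 0, len(ring_weights) - 1)

    # Look up each pixel's per-feature weights for its ring,
    # then integrate the feature maps linearly.
    weights = ring_weights[ring]                  # (H, W, K)
    return np.einsum('hwk,khw->hw', weights, feature_maps)

# Toy usage with random feature maps and hypothetical weights.
rng = np.random.default_rng(0)
features = rng.random((3, 120, 160))              # 3 feature channels
edges = np.array([0, 20, 60, 200])                # fovea / parafovea / periphery
weights = np.array([[0.6, 0.3, 0.1],              # foveal ring
                    [0.3, 0.4, 0.3],              # parafoveal ring
                    [0.1, 0.3, 0.6]])             # peripheral ring
smap = gaze_dependent_saliency(features, gaze_xy=(80, 60),
                               ring_weights=weights, ring_edges=edges)
print(smap.shape)                                 # (120, 160)
```

Discretizing eccentricity into rings keeps the learned weights a small (rings × features) table; a smooth parameterization of the weights as a continuous function of distance would serve the same purpose.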