Visual feature extraction via eye tracking for saliency driven 2D/3D registration

  • Authors:
Adrian James Chung; Fani Deligianni; Xiao-Peng Hu; Guang-Zhong Yang

  • Affiliations:
Royal Society/Wolfson Foundation Medical Image Computing Laboratory, Department of Computing, Imperial College London (all authors)

  • Venue:
  • Proceedings of the 2004 symposium on Eye tracking research & applications
  • Year:
  • 2004

Abstract

This paper presents a new technique for extracting visual saliency from experimental eye-tracking data. An eye-tracking system is employed to determine which features a group of human observers considered salient when viewing a set of video images. With this information, a biologically inspired saliency map is derived by transforming each observed video image into a feature-space representation. Features related to visual attention are determined through a feature-normalisation process that compares the relative abundance of visual features in the background image with that of features dwelt upon along the eye-tracking scan paths. These features are then back-projected to the image domain to determine spatial areas of interest in unseen video images. The strengths and weaknesses of the method are demonstrated through feature correspondence for 2D/3D registration of endoscopy video with computed tomography data. The biologically derived saliency map provides the image similarity measure that forms the heart of the 2D/3D registration method. It is shown that by processing only the selective regions of interest indicated by the saliency map, rendering overhead can be greatly reduced. Significant improvements in pose-estimation efficiency are achieved without apparent loss of registration accuracy compared to a non-saliency-based similarity measure.
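The abstract gives no formulas, but the core idea — rate features as salient when they are over-represented at fixation points relative to the background, back-project those weights to the image, and use the result to weight an image similarity measure — can be illustrated with a minimal sketch. All function names, the histogram binning, and the SSD-style similarity below are assumptions for illustration, not the authors' actual method:

```python
import numpy as np

def saliency_weights(fixated_feats, background_feats, bins=16):
    """Normalise by relative abundance: a feature value is salient when its
    density among fixated pixels exceeds its density in the background."""
    # Bin edges come from the background so both histograms share support.
    h_bg, edges = np.histogram(background_feats, bins=bins, density=True)
    h_fix, _ = np.histogram(fixated_feats, bins=edges, density=True)
    return h_fix / (h_bg + 1e-8), edges

def back_project(feature_map, weights, edges):
    """Replace each pixel's feature value with its saliency weight,
    yielding a spatial saliency map for an unseen image."""
    idx = np.clip(np.digitize(feature_map, edges) - 1, 0, len(weights) - 1)
    return weights[idx]

def weighted_similarity(img_a, img_b, saliency):
    """Saliency-weighted sum of squared differences (lower = more similar);
    pixels with near-zero saliency contribute almost nothing, so a renderer
    could skip them to reduce overhead."""
    w = saliency / saliency.sum()
    return float(np.sum(w * (img_a - img_b) ** 2))
```

In a registration loop, `weighted_similarity` would compare an endoscopy frame against CT renderings at candidate camera poses, with the saliency map restricting which regions actually need to be rendered and compared.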