Extraction of visual features with eye tracking for saliency driven 2D/3D registration

  • Authors:
  • Adrian J. Chung
  • Fani Deligianni
  • Xiao-Peng Hu
  • Guang-Zhong Yang

  • Affiliations:
  • Royal Society/Wolfson Foundation Medical Image Computing Laboratory, Department of Computing, Imperial College, 180 Queen's Gate, SW7 2BZ London, UK (all authors)

  • Venue:
  • Image and Vision Computing
  • Year:
  • 2005

Abstract

This paper presents a new technique for deriving visual-saliency information from experimental eye-tracking data. The strengths and potential pitfalls of the method are demonstrated through feature correspondence for 2D to 3D image registration. In this application, an eye-tracking system is employed to determine which features in endoscopic video images are considered salient by a group of human observers. Using this information, a biologically inspired saliency map is derived by transforming each observed video image into a feature-space representation. Features related to visual attention are identified through a feature-normalisation process based on the relative abundance of image features in the background image versus those dwelled upon along visual-search scan paths. These features are then back-projected to the image domain to determine spatial areas of interest for each unseen endoscopic video image. The derived saliency map is employed to provide an image similarity measure that forms the heart of a new 2D/3D registration method with greatly reduced rendering overhead, since only selective regions of interest determined by the saliency map are processed. Significant improvements in pose-estimation efficiency are achieved without apparent reduction in registration accuracy compared to using an intensity-based similarity measure.
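The abstract describes two key steps: weighting image features by their relative abundance in fixated regions versus the background, and restricting the registration similarity measure to salient pixels. A minimal sketch of both ideas is below; the function names, the histogram-ratio weighting, and the use of normalised cross-correlation as the similarity measure are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def saliency_weights(fixated_hist, background_hist, eps=1e-8):
    # Illustrative feature normalisation: features over-represented in
    # regions dwelled upon by observers, relative to their abundance in
    # the background image, receive high weights.
    return fixated_hist / (background_hist + eps)

def masked_similarity(rendered, video, saliency_map, threshold=0.5):
    # Restrict the intensity comparison to salient pixels only, which is
    # what reduces the rendering/processing overhead during registration.
    mask = saliency_map >= threshold
    a = rendered[mask].astype(float)
    b = video[mask].astype(float)
    a -= a.mean()
    b -= b.mean()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    # Normalised cross-correlation over the masked region (assumed here;
    # the paper's similarity measure may differ).
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```

In a registration loop, `masked_similarity` would be evaluated between each candidate rendered pose and the video frame, with the pose maximising the score taken as the estimate.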