A Feature-Based, Robust, Hierarchical Algorithm for Registering Pairs of Images of the Curved Human Retina

  • Authors:
  • Ali Can; Charles V. Stewart; Badrinath Roysam; Howard L. Tanenbaum

  • Affiliations:
  • Rensselaer Polytechnic Institute, Troy, NY; Rensselaer Polytechnic Institute, Troy, NY; Rensselaer Polytechnic Institute, Troy, NY; The Center for Sight, Albany, NY

  • Venue:
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Year:
  • 2002

Quantified Score

Hi-index 0.14

Abstract

This paper describes a robust hierarchical algorithm for fully automatic registration of a pair of images of the curved human retina photographed by a fundus microscope. Accurate registration is essential for mosaic synthesis, change detection, and design of computer-aided instrumentation. Central to the new algorithm is a 12-parameter interimage transformation derived by modeling the retina as a rigid quadratic surface with unknown parameters, imaged by an uncalibrated weak perspective camera. The parameters of this model are estimated by matching vascular landmarks extracted by an algorithm that recursively traces the blood vessel structure. The parameter estimation technique, which could be generalized to other applications, is a hierarchy of models and methods: an initial match set is pruned based on a zeroth-order transformation estimated as the peak of a similarity-weighted histogram; a first-order, affine transformation is estimated using the reduced match set and least median of squares; and the final, second-order, 12-parameter transformation is estimated using an M-estimator initialized from the first-order estimate. This hierarchy makes the algorithm robust to unmatchable image features and to mismatches between features caused by large interframe motions. Before final convergence of the M-estimator, feature positions are refined and the correspondence set is enhanced using normalized sum-of-squared-differences matching of regions deformed by the emerging transformation. Experiments involving 3,000 image pairs (1,024 × 1,024 pixels) from 16 different healthy eyes were performed. Starting from as little as 20 percent overlap between images, the success rate improves exponentially with increasing overlap, and the failure rate is negligible above 67 percent overlap. The experiments also quantify the reduction in errors as the model complexity increases. Final registration errors of less than a pixel are routinely achieved. The speed, accuracy, and ability to handle small overlaps compare favorably with retinal image registration techniques published in the literature.
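
To make the transformation model concrete, the sketch below (not taken from the paper) shows the 12-parameter quadratic mapping, i.e., each output coordinate is a full second-order polynomial in the input coordinates, together with an illustrative iteratively reweighted least-squares step standing in for the final M-estimation stage. The function names, the Cauchy-style weight, the scale parameter sigma, and the fixed iteration count are assumptions for illustration only; the authors' actual loss function, initialization, and convergence tests are those described in the paper.

```python
import numpy as np

def quadratic_basis(pts):
    """Lift 2-D points (x, y) to the 6-term quadratic basis
    [x^2, xy, y^2, x, y, 1] used by the 12-parameter model."""
    x, y = pts[:, 0], pts[:, 1]
    return np.stack([x * x, x * y, y * y, x, y, np.ones_like(x)], axis=1)

def apply_quadratic(theta, pts):
    """Map points through the 12-parameter transformation.
    theta is a 2x6 matrix: one row of 6 coefficients per output coordinate."""
    return quadratic_basis(pts) @ theta.T

def fit_quadratic_irls(src, dst, theta0, sigma=1.0, n_iter=20):
    """Illustrative M-estimator fit via iteratively reweighted least squares,
    initialized from a lower-order estimate, as in the paper's hierarchy.
    The Cauchy weight used here is a stand-in, not the paper's loss."""
    theta = theta0.copy()
    B = quadratic_basis(src)
    for _ in range(n_iter):
        residuals = np.linalg.norm(apply_quadratic(theta, src) - dst, axis=1)
        w = 1.0 / (1.0 + (residuals / sigma) ** 2)   # robust downweighting of outliers
        W = np.sqrt(w)[:, None]
        # Weighted least squares, solved jointly for the x' and y' coefficient rows
        sol, *_ = np.linalg.lstsq(W * B, W * dst, rcond=None)
        theta = sol.T                                 # back to 2x6
    return theta
```

In the hierarchy described in the abstract, theta0 would be built from the first-order affine estimate by placing its six coefficients in the x, y, and constant columns of each row and setting the three quadratic columns to zero, so the M-estimation starts from the already-robust lower-order solution.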