An anatomical equivalence class based joint transformation-residual descriptor for morphological analysis

  • Authors:
  • Sajjad Baloch; Ragini Verma; Christos Davatzikos

  • Affiliations:
  • University of Pennsylvania, Philadelphia, PA (all authors)

  • Venue:
  • IPMI'07: Proceedings of the 20th International Conference on Information Processing in Medical Imaging
  • Year:
  • 2007


Abstract

Existing approaches to computational anatomy assume that a perfectly conforming diffeomorphism applied to an anatomy of interest captures its morphological characteristics relative to a template. However, biological variability renders this task extremely difficult, if it is possible at all in many cases. Consequently, the information not reflected by the transformation is lost permanently from subsequent analysis. We establish that this residual information is highly significant for characterizing subtle morphological variations and is complementary to the transformation. The amount of residual depends, in turn, on the transformation parameters, such as the degree of regularization, as well as on the template. We therefore present a methodology that measures morphological characteristics via a lossless morphological descriptor based on both the residual and the transformation. Since there are infinitely many (transformation, residual) pairs that reconstruct a given anatomy, and since these pairs collectively form a nonlinear manifold embedded in a high-dimensional space, we treat them as members of an Anatomical Equivalence Class (AEC). A representation of each individual anatomy that is unique and optimal according to a certain criterion is then selected from the corresponding AEC by solving an optimization problem. This process effectively determines the optimal template and transformation parameters for each individual anatomy and removes the corresponding confounding variation from the data. Based on statistical tests on synthetic 2D images and real 3D brain scans with simulated atrophy, we show that this approach provides a significant improvement over descriptors based solely on a transformation, in addition to being nearly independent of the choice of template.
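
The following Python sketch is a rough 1D illustration of the decomposition and the AEC selection step described in the abstract, not the authors' implementation. The demons-style registration, the test intensity profiles, and the energy criterion used to pick an AEC member are all illustrative assumptions; the paper optimizes its own criterion over templates and transformation parameters.

```python
# A minimal 1D sketch (assumed setup, not the paper's algorithm) of the
# transformation-plus-residual decomposition. A "subject" anatomy is
# reconstructed losslessly from a warped template plus a residual:
#     subject(x) = template(x + u(x)) + residual(x)
# Each regularization level yields a different (transformation, residual)
# pair; all such pairs reconstruct the subject exactly, so they belong to
# the same Anatomical Equivalence Class (AEC).

import numpy as np
from scipy.ndimage import gaussian_filter1d

x = np.linspace(0.0, 1.0, 200)
template = np.exp(-(x - 0.50) ** 2 / 0.02)          # template anatomy
subject = 0.8 * np.exp(-(x - 0.55) ** 2 / 0.015)    # subject anatomy

def register(sigma, n_iter=200, step=0.005):
    """Toy demons-style registration with Gaussian regularization.

    sigma sets the degree of regularization: a stiffer warp (large
    sigma) leaves more information in the residual.
    """
    u = np.zeros_like(x)
    for _ in range(n_iter):
        warped = np.interp(x + u, x, template)       # template under warp
        grad = np.gradient(warped, x)
        force = (warped - subject) * grad            # image-matching force
        u = gaussian_filter1d(u - step * force, sigma)
    warped = np.interp(x + u, x, template)
    return u, subject - warped                       # (warp, lossless residual)

# Members of the subject's AEC, one per regularization level.
candidates = [(sigma, *register(sigma)) for sigma in (1.0, 3.0, 10.0, 30.0)]

# Hypothetical selection criterion: trade warp energy against residual
# energy (a stand-in for the paper's optimality criterion).
def energy(member):
    sigma, u, r = member
    return np.sum(np.gradient(u, x) ** 2) + np.sum(r ** 2)

sigma, u, r = min(candidates, key=energy)
print(f"selected regularization sigma = {sigma}, "
      f"residual energy = {np.sum(r ** 2):.4f}")
```

Running the sketch with increasing sigma shows the trade-off the abstract describes: a more heavily regularized transformation leaves a larger residual, so neither component alone characterizes the anatomy, while the pair together reconstructs it exactly.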