Redundancy, redundancy, redundancy: the three keys to highly robust anatomical parsing in medical images

  • Authors:
  • Xiang Sean Zhou; Zhigang Peng; Yiqiang Zhan; Maneesh Dewan; Bing Jian; Arun Krishnan; Yimo Tao; Martin Harder; Stefan Grosskopf; Ute Feuerlein

  • Affiliations:
  • Siemens Healthcare, Malvern, PA, USA (Zhou, Peng, Zhan, Dewan, Jian, Krishnan, Tao); Siemens Healthcare, Erlangen, Germany (Harder); Siemens Healthcare, Forchheim, Germany (Grosskopf, Feuerlein)

  • Venue:
  • Proceedings of the International Conference on Multimedia Information Retrieval
  • Year:
  • 2010


Abstract

Although redundancy reduction is key to visual coding in the mammalian visual system [1,2], at a higher level the visual understanding step, a central component of intelligence, achieves high robustness by exploiting redundancies in images in order to resolve uncertainty, ambiguity, or contradiction [3,4]. In this paper, an algorithmic framework, Learning Ensembles of Anatomical Patterns (LEAP), is presented for the automatic localization and parsing of human anatomy in medical images. It achieves high robustness by exploiting statistical redundancies at three levels: the anatomical level, the parts-whole level, and the voxel level in scale space. The recognition-by-parts intuition is formulated in a more principled way as a spatial ensemble, with added redundancy and less parameter tuning for medical imaging applications. Different use cases were tested on 2D and 3D medical images, including X-ray, CT, and MRI, for purposes such as view identification, organ and body-part localization, and MR imaging-plane detection. LEAP is shown to significantly outperform existing methods and its "non-redundant" counterparts.
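The parts-whole redundancy that the abstract describes can be pictured with a small sketch: several independent part detectors each vote for the target location through a learned offset, and a robust aggregate tolerates a minority of detector failures. The sketch below is a minimal illustration of that spatial-ensemble idea, not the paper's actual LEAP implementation; the part names, offsets, and median-voting rule are all hypothetical assumptions.

```python
# Illustrative sketch of a spatial ensemble of part detectors
# (hypothetical; not the authors' LEAP implementation).
import numpy as np

def localize_target(detections, offsets):
    """Predict a target landmark from redundant part detections.

    detections: dict mapping part name -> detected (x, y, z) position,
                or None when that detector fails on this image.
    offsets:    dict mapping part name -> learned mean offset from the
                part to the target (e.g., from training statistics).

    Each successfully detected part casts one vote for the target
    location; the component-wise median keeps the ensemble robust to
    a minority of wrong or missing votes.
    """
    votes = [np.asarray(pos) + np.asarray(offsets[name])
             for name, pos in detections.items() if pos is not None]
    if not votes:
        raise ValueError("no part detector fired; cannot localize target")
    return np.median(np.stack(votes), axis=0)

# Hypothetical usage: three of four detectors fire; one is a gross outlier.
detections = {
    "left_kidney_top": (40.0, 52.0, 110.0),
    "spine_l1":        (64.0, 80.0, 112.0),
    "liver_dome":      (200.0, 10.0, 30.0),  # detector failure (outlier)
    "right_rib_12":    None,                 # detector did not fire
}
offsets = {
    "left_kidney_top": (10.0, -2.0, 0.0),
    "spine_l1":        (-14.0, -30.0, -2.0),
    "liver_dome":      (0.0, 5.0, 8.0),
    "right_rib_12":    (5.0, 5.0, 5.0),
}
print(localize_target(detections, offsets))  # median vote near (50, 50, 110)
```

The median is one simple choice of robust aggregator: it ignores up to roughly half of the votes being wrong, which mirrors the abstract's point that redundant part-level evidence lets the whole remain correct even when individual detectors fail.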