Multi-temporal globally-optimal dense 3-D cell segmentation and tracking from multi-photon time-lapse movies of live tissue microenvironments

  • Authors:
  • Arunachalam Narayanaswamy; Amine Merouane; Antonio Peixoto; Ena Ladi; Paul Herzmark; Ulrich von Andrian; Ellen Robey; Badrinath Roysam

  • Affiliations:
  • Google Inc., Mountain View, CA; University of Houston, Houston, TX; Institute of Pharmacology and Structural Biology, UMR 5089, France; Department of Immunology, Genentech, South San Francisco, CA; University of California, Berkeley, CA; Harvard Medical School, Boston, MA; University of California, Berkeley, CA; University of Houston, Houston, TX

  • Venue:
  • STIA'12 Proceedings of the Second International Conference on Spatio-temporal Image Analysis for Longitudinal and Time-Series Image Data
  • Year:
  • 2012

Abstract

Living immune-system microenvironments can be imaged by time-lapse multi-photon, multi-spectral microscopy to reveal complex tissue architecture and cell movements. Automated segmentation and tracking of these motile, numerous, and densely packed cells over long-duration 3-D movies is needed to sense and quantify subtle phenotypic differences between genetically modified and wild-type cells. We present a novel multi-temporal 3-D cell tracking algorithm that: (i) implicitly models and corrects segmentation errors by exploiting spatio-temporal continuity, (ii) computes globally optimal correspondences through second-order matching in a directed hypergraph, (iii) does not require any manual initialization, and (iv) employs a trainable nonparametric motion model based on smooth kernel density estimation. The tracking problem is formulated as a second-order hyperedge selection problem in a directed hypergraph, and solved using branch-and-cut integer programming. A quantitative study on four real datasets containing 3,361 cells showed that our algorithm reduces segmentation errors by 53% after tracking, compared with independent per-frame segmentation. In comparison, Jaqaman et al.'s u-track [1] algorithm eliminates only 38% of the segmentation errors. Over 7,213 track correspondences, our tracking algorithm's error rate was 2.23%, 28.8% lower than that of u-track.
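To make the hypergraph formulation concrete, below is a minimal, self-contained sketch, not the authors' implementation: it scores candidate second-order hyperedges (one detection in each of three consecutive frames) with a Gaussian-KDE motion model and then selects a globally consistent set by solving a binary integer program with SciPy's `milp`. The synthetic training displacements, toy centroids, and exactly-once coverage constraint are illustrative assumptions; the paper's model additionally handles cell appearance, disappearance, and segmentation-error hypotheses.

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# Trainable nonparametric motion model: a smooth Gaussian KDE fit to 3-D
# displacement vectors (dx, dy, dz). The synthetic displacements below
# stand in for displacements measured on verified training tracks.
train_displacements = rng.normal(0.0, 1.0, size=(3, 500))
motion_kde = gaussian_kde(train_displacements)

def motion_likelihood(p_from, p_to):
    """KDE density of the displacement implied by a candidate correspondence."""
    return float(motion_kde((p_to - p_from).reshape(3, 1))[0])

# Toy cell centroids (microns) in three consecutive frames t-1, t, t+1.
frames = [
    np.array([[0.0, 0.0, 0.0], [10.0, 0.0, 0.0]]),
    np.array([[1.0, 0.5, 0.0], [10.5, 1.0, 0.0]]),
    np.array([[2.0, 1.0, 0.0], [11.0, 2.0, 0.0]]),
]

# Enumerate second-order hyperedges (a, b, c): detection a in frame t-1,
# linked to b in frame t, linked to c in frame t+1. A hyperedge costs the
# negative log-likelihood of its two displacement hops under the KDE.
hyperedges, costs = [], []
for a, pa in enumerate(frames[0]):
    for b, pb in enumerate(frames[1]):
        for c, pc in enumerate(frames[2]):
            lik = motion_likelihood(pa, pb) * motion_likelihood(pb, pc)
            hyperedges.append((a, b, c))
            costs.append(-np.log(lik + 1e-300))

# Coverage constraint: every detection belongs to exactly one selected
# hyperedge (a simplification relative to the paper's richer model).
A = np.array([[1.0 if edge[f] == d else 0.0 for edge in hyperedges]
              for f in range(3) for d in range(len(frames[f]))])
cover = LinearConstraint(A, lb=1, ub=1)

# Binary integer program: globally minimize total cost over consistent
# hyperedge selections (solved here via SciPy's HiGHS-based MILP solver).
res = milp(c=np.array(costs), constraints=[cover],
           integrality=np.ones(len(hyperedges)), bounds=Bounds(0, 1))
selected = [hyperedges[i] for i in np.flatnonzero(res.x > 0.5)]
print("selected hyperedges (t-1, t, t+1):", selected)
```

On this toy input the solver picks the two short-displacement hyperedges (0, 0, 0) and (1, 1, 1), since cross-cell assignments imply ~10-micron jumps with near-zero KDE density. In the full method, an analogous but much larger integer program spans many frames and is solved with branch-and-cut, which is what allows per-frame segmentation errors to be corrected jointly with tracking.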