Self-learning segmentation and classification of cell-nuclei in 3D volumetric data using voxel-wise gray scale invariants

  • Authors:
  • Janis Fehr, Olaf Ronneberger, Haymo Kurz, Hans Burkhardt

  • Affiliations:
  • Institut für Informatik, Lehrstuhl für Mustererkennung und Bildverarbeitung, Albert-Ludwigs-Universität Freiburg, Freiburg, Germany (Fehr, Ronneberger, Burkhardt); Institut für Anatomie und Zellbiologie, Albert-Ludwigs-Universität Freiburg, Germany (Kurz)

  • Venue:
  • PR'05: Proceedings of the 27th DAGM Conference on Pattern Recognition
  • Year:
  • 2005

Abstract

We introduce and discuss a new method for the segmentation and classification of cells in 3D tissue probes. The anisotropic 3D volumetric data of fluorescently labeled cell nuclei is recorded with a confocal laser scanning microscope (LSM). Voxel-wise gray-scale features (see the accompanying paper [1][2]), invariant to 3D rotations of each voxel's neighborhood, are extracted from the original data by integrating over the 3D rotation group with non-linear kernels. In an interactive process, support-vector-machine models are trained for each cell type using user relevance feedback. With this reference database at hand, segmentation and classification can be achieved in a single step, simply by classifying each voxel and performing a connected-component labelling, fully automatically and without further human interaction. This general approach is easily adapted to other cell types or tissue structures simply by adding new training samples and re-training the model. Experiments with datasets from the chicken chorioallantoic membrane show encouraging results.
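The pipeline the abstract describes can be sketched roughly as follows. This is a minimal illustrative toy, not the paper's implementation: the synthetic volume, the radii, the simple two-point kernel (for which the integral over all 3D rotations reduces to an average over directions on a sphere), and the use of ground-truth voxel labels in place of interactive user relevance feedback are all assumptions made for the sake of a self-contained example.

```python
# Hedged sketch of: voxel-wise rotation-invariant gray-scale features,
# per-voxel SVM classification, then connected-component labelling.
# All parameters (radii, number of directions, SVM settings) are
# illustrative assumptions, not the paper's values.
import numpy as np
from scipy.ndimage import label, map_coordinates
from sklearn.svm import SVC

def sphere_directions(n=64, seed=0):
    """Quasi-uniform unit vectors used to approximate the sphere integral."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def invariant_features(vol, radii=(1.0, 2.0, 3.0), n_dirs=64):
    """Per-voxel rotation-invariant features.

    For a two-point kernel f = X(0) * X(r * n), integrating over the full
    3D rotation group reduces to averaging over unit directions n, which
    yields one rotation-invariant value per radius r."""
    dirs = sphere_directions(n_dirs)
    zz, yy, xx = np.meshgrid(*(np.arange(s) for s in vol.shape), indexing="ij")
    centers = np.stack([zz, yy, xx], axis=0).reshape(3, -1).astype(float)
    feats = [vol.ravel()]                          # radius 0: the voxel itself
    for r in radii:
        acc = np.zeros(centers.shape[1])
        for d in dirs:
            pts = centers + (r * d)[:, None]       # sample points on the sphere
            acc += map_coordinates(vol, pts, order=1, mode="nearest")
        feats.append(vol.ravel() * acc / n_dirs)   # two-point kernel average
    return np.stack(feats, axis=1)                 # shape: (n_voxels, n_features)

# Synthetic stand-in for an LSM recording: one bright "nucleus" in noise.
vol = np.zeros((16, 16, 16))
z, y, x = np.ogrid[:16, :16, :16]
vol[(z - 8) ** 2 + (y - 8) ** 2 + (x - 8) ** 2 <= 9] = 1.0
vol += 0.05 * np.random.default_rng(1).normal(size=vol.shape)

X = invariant_features(vol)
labels_train = (vol.ravel() > 0.5).astype(int)     # stand-in for user feedback
clf = SVC(kernel="rbf", gamma="scale").fit(X, labels_train)

pred = clf.predict(X).reshape(vol.shape)           # classify every voxel
components, n_cells = label(pred)                  # connected-component labelling
print("segmented components:", n_cells)
```

Once the SVM models are trained, new volumes are processed with no further interaction: extract the same features, classify each voxel, and read the segmented cells off the connected-component map.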