Motion segmentation using a k-nearest-neighbor-based fusion procedure of spatial and temporal label cues

  • Authors:
  • Pierre-Marc Jodoin; Max Mignotte

  • Affiliations:
  • Département d'Informatique et de Recherche Opérationnelle (DIRO), Université de Montréal, Montréal, Québec (both authors)

  • Venue:
  • ICIAR'05: Proceedings of the Second International Conference on Image Analysis and Recognition
  • Year:
  • 2005

Abstract

Traditional motion segmentation techniques generally depend on a pre-estimated optical flow. Unfortunately, the lack of precision at object edges exhibited by most popular motion estimation methods makes them ill-suited to recovering the exact shape of moving objects. In this contribution, we present an original motion segmentation technique using a K-nearest-neighbor-based fusion of spatial and temporal label cues. Our fusion model takes as input a spatial segmentation of a still image and an estimated version of the motion label field. It minimizes an energy function composed of spatial and temporal label cues extracted from the two input fields. The proposed algorithm is intuitive, simple to implement, and general enough to be applied to other segmentation problems. Furthermore, the method does not require estimating any threshold or weighting function between the spatial and temporal energy terms, as energy-based segmentation models sometimes do. Experiments on synthetic and real image sequences indicate that the proposed method is robust and accurate.
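To make the fusion idea concrete, here is a minimal sketch of one plausible reading of such a scheme, not the authors' actual algorithm: each pixel's motion label is replaced by a majority vote over its k × k spatial neighborhood, with the vote restricted to neighbors lying in the same region of the spatial segmentation. The function name `knn_label_fusion` and all parameters are illustrative assumptions; the paper's energy formulation and neighborhood definition may differ.

```python
import numpy as np

def knn_label_fusion(spatial_seg, motion_labels, k=5, n_iter=3):
    """Hypothetical sketch: clean a noisy motion label field using a
    spatial segmentation.  Each pixel takes the majority motion label
    among neighbors (in a k x k window) that belong to the same
    spatial-segmentation region, so motion boundaries snap to the
    sharper edges of the still-image segmentation."""
    h, w = motion_labels.shape
    fused = motion_labels.copy()
    r = k // 2
    for _ in range(n_iter):
        out = fused.copy()
        for y in range(h):
            for x in range(w):
                # Clip the k x k window to the image borders.
                y0, y1 = max(0, y - r), min(h, y + r + 1)
                x0, x1 = max(0, x - r), min(w, x + r + 1)
                # Only neighbors in the same spatial region may vote.
                same_region = spatial_seg[y0:y1, x0:x1] == spatial_seg[y, x]
                votes = fused[y0:y1, x0:x1][same_region]
                vals, counts = np.unique(votes, return_counts=True)
                out[y, x] = vals[np.argmax(counts)]
        fused = out
    return fused
```

Note that, in line with the abstract's claim, a majority-vote rule of this kind needs no threshold or weighting between the spatial and temporal cues: the spatial field only gates which temporal labels may vote.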