Figure-ground separation by cue integration

  • Authors: Xiangyu Tang; Christoph von der Malsburg

  • Affiliations: Computer Science Department, University of Southern California, Los Angeles, CA 90089, U.S.A. (tangx@organic.usc.edu); Frankfurt Institute for Advanced Studies, 60438 Frankfurt am Main, Germany, and Computer Science Department, University of Southern California, Los Angeles, CA 90089, U.S.A. (malsburg@organic.usc. ...)

  • Venue: Neural Computation
  • Year: 2008


Abstract

This letter presents an improved cue integration approach for reliably separating coherent moving objects from the background scene in video sequences. The proposed method uses a probabilistic framework to unify bottom-up and top-down cues in a parallel, “democratic” fashion. The algorithm employs a modified Bayes rule in which each pixel's posterior probability of figure or ground layer assignment is derived from likelihood models of three bottom-up cues and a prior model provided by a top-down cue. Each cue is treated as independent evidence for figure-ground separation. The cues compete with and complement one another dynamically, adjusting their relative weights from frame to frame according to each cue's quality measured against the overall integration. At the same time, the likelihood or prior model of each individual cue adapts toward the integrated result. These mechanisms enable the system to self-organize under the influence of visual scene structure without manual intervention. A novel contribution is the incorporation of a top-down cue, which improves the system's robustness and accuracy and helps handle difficult and ambiguous situations, such as abrupt lighting changes or occlusion among multiple objects. Results on various video sequences are demonstrated and discussed. (Video demos are available at http://organic.usc.edu:8376/~tangx/neco/index.html.)
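The per-pixel Bayesian combination the abstract describes can be sketched as follows. This is an illustrative reading, not the letter's exact formulation: the cue names, the log-domain weighting scheme, and the agreement-based cue-quality measure used for reweighting are all assumptions made for the sketch.

```python
import numpy as np

def integrate_cues(likelihoods, weights, prior_fig, eps=1e-12):
    """Per-pixel posterior P(figure): weighted bottom-up cue
    likelihoods combined with a top-down prior (modified Bayes rule;
    the weighting scheme here is an assumption)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                          # normalized cue weights
    log_fig = np.log(prior_fig + eps)        # start from the top-down prior
    log_gnd = np.log(1.0 - prior_fig + eps)
    for wi, (p_f, p_g) in zip(w, likelihoods):
        # Cues enter as independent evidence; the weight softens or
        # sharpens each cue's contribution to the posterior.
        log_fig = log_fig + wi * np.log(p_f + eps)
        log_gnd = log_gnd + wi * np.log(p_g + eps)
    num = np.exp(log_fig)
    return num / (num + np.exp(log_gnd))

def update_weights(likelihoods, posterior, weights, rate=0.2, eps=1e-12):
    """Drift each cue's weight toward its agreement with the integrated
    result (a hypothetical stand-in for the cue-quality measure)."""
    quality = []
    for p_f, p_g in likelihoods:
        cue_belief = p_f / (p_f + p_g + eps)  # the cue's own figure belief
        quality.append(1.0 - np.mean(np.abs(cue_belief - posterior)))
    q = np.asarray(quality)
    w = (1.0 - rate) * np.asarray(weights, dtype=float) + rate * q / q.sum()
    return w / w.sum()

# Toy frame: three bottom-up cues as (figure, ground) likelihood pairs
# over a 2x2 image, plus a flat top-down prior.
h, w = 2, 2
motion = (np.full((h, w), 0.9), np.full((h, w), 0.1))
color  = (np.full((h, w), 0.7), np.full((h, w), 0.3))
edge   = (np.full((h, w), 0.6), np.full((h, w), 0.4))
cues = [motion, color, edge]
prior = np.full((h, w), 0.5)

posterior = integrate_cues(cues, [1.0, 1.0, 1.0], prior)
new_weights = update_weights(cues, posterior, [1/3, 1/3, 1/3])
```

Under these assumptions, the cue whose own figure belief most closely matches the integrated posterior gains weight on the next frame, which mirrors the frame-to-frame competition the abstract describes.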