Escaping local minima through hierarchical model selection: Automatic object discovery, segmentation, and tracking in video

  • Authors:
  • Nebojsa Jojic; John Winn; Larry Zitnick

  • Affiliations:
  • Microsoft Research; Microsoft Research; Microsoft Research

  • Venue:
  • CVPR '06 Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Volume 1
  • Year:
  • 2006

Abstract

Recently, the generative modeling approach to video segmentation has been gaining popularity in the computer vision community. For example, the flexible sprites framework has been studied in, among other references, [11,13,14,24]. In general, detailed generative models are vulnerable to intractable inference and, when approximations are made, to local minima (see, e.g., [25]). Recent approaches to these problems have focused on inference techniques for increasingly expressive models. Simpler models, on the other hand, while less precise, are often not only faster but also less prone to local minima. In addition, while many different models may be based on similar hidden variables, some models are more amenable to inference of some of the shared variables, while others lead to efficient and accurate inference of other components of the hierarchical data description. In this paper, we empirically illustrate that forcing multiple models to share the posterior distribution leads to inference that is less prone to local minima. We define a set of key hidden variables that describe the aspects of the data we care about. The relationships among these key variables are defined through multiple conditional distribution models on the same pairs of variables, controlled by switch variables. The posterior distribution over the key hidden variables is shared, and inference of the switch variables serves as a mechanism for combinatorial model selection. The key observation is that while the most expressive model often ends up the winner by the end of the iterative learning of model parameters, early iterations are dominated by simpler model components, and upon convergence the free energy is lower than that reached by switching on all of the most complex components from the beginning of learning. We illustrate the performance of this approach on the unsupervised video segmentation task.
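The mechanism the abstract describes can be illustrated with a deliberately tiny sketch (not the paper's model): one key hidden variable, two competing conditional models for the same data, and a switch variable whose posterior performs the model selection. All names, initializations, and the Gaussian models themselves are assumptions chosen only to show the qualitative behavior: a simple, smooth model dominates early while the shared estimate of the key variable is still poor, and the more expressive model takes over once that estimate improves.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(3.0, 1.0, size=200)   # toy observations (hypothetical data)

# Key hidden variable: the mean mu, shared by both conditional models p(x | mu).
#   model 0 ("simple"):  fixed broad variance -- few parameters, smooth objective
#   model 1 ("complex"): learned variance    -- expressive, but badly initialized
mu = 0.0
sigma2 = [25.0, 0.1]            # model 1 starts with an overconfident variance
log_prior = np.log([0.5, 0.5])  # uniform prior on the switch variable s

history = []
for it in range(20):
    # E-step for the switch: total data log-likelihood under each model,
    # both evaluated at the *shared* current estimate of mu
    ll = np.array([
        np.sum(-0.5 * (x - mu) ** 2 / s2 - 0.5 * np.log(2 * np.pi * s2))
        for s2 in sigma2
    ])
    log_q = log_prior + ll
    q = np.exp(log_q - np.logaddexp(log_q[0], log_q[1]))
    history.append(q.copy())

    # M-step: update the shared key variable, then refit only the
    # complex model's extra parameter (its variance)
    mu = x.mean()
    sigma2[1] = np.mean((x - mu) ** 2)

print("switch posterior, iteration 0:", history[0])   # simple model dominates
print("switch posterior, final:     ", history[-1])   # complex model has won
print("estimated mu:", mu)
```

In this caricature the simple broad-variance model shields inference of mu from the complex model's bad initialization; by convergence the switch posterior has moved to the expressive model, mirroring the "winner by the end, but not at the start" behavior the abstract reports.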