Segmenting highly articulated video objects with weak-prior random forests

  • Authors:
  • Hwann-Tzong Chen; Tyng-Luh Liu; Chiou-Shann Fuh

  • Affiliations:
  • Institute of Information Science, Academia Sinica, Taipei, Taiwan; Institute of Information Science, Academia Sinica, Taipei, Taiwan; Department of CSIE, National Taiwan University, Taipei, Taiwan

  • Venue:
  • ECCV'06: Proceedings of the 9th European Conference on Computer Vision, Part IV
  • Year:
  • 2006


Abstract

We address the problem of segmenting highly articulated video objects in a wide variety of poses. The main idea of our approach is to model the prior information of object appearance with random forests. To extract an object from a video sequence automatically, we first build a random forest from image patches sampled from the initial template. Because it relies on a randomized technique and simple features, the modeled prior is weak, but this makes it well suited to our application. Furthermore, the random forest can be updated dynamically to generate prior probabilities about the configurations of the object in subsequent frames. The algorithm then combines these prior probabilities with low-level region information to produce a sequence of figure-ground segmentations. Overall, the proposed segmentation technique is useful and flexible: one can easily integrate different cues and efficiently select discriminating features to model object appearance and handle various articulations.
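The core idea of the abstract, learning a weak appearance prior from template patches with a randomized forest and reading out foreground probabilities for new patches, can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the 2x2 patches, the two features (mean intensity and intensity range), and the single-split randomized stumps are all assumptions chosen to keep the example self-contained.

```python
import random

def patch_features(patch):
    """Simple features of a patch: mean intensity and intensity range."""
    flat = [v for row in patch for v in row]
    return [sum(flat) / len(flat), max(flat) - min(flat)]

def train_stump(samples, labels, rng):
    """Randomized stump: pick a random feature and a random threshold
    drawn from the data; each side stores its fraction of figure patches."""
    f = rng.randrange(len(samples[0]))
    thr = rng.choice([s[f] for s in samples])
    left = [y for s, y in zip(samples, labels) if s[f] <= thr]
    right = [y for s, y in zip(samples, labels) if s[f] > thr]
    frac = lambda ys: (sum(ys) / len(ys)) if ys else 0.5
    return (f, thr, frac(left), frac(right))

def train_forest(samples, labels, n_trees=25, seed=0):
    """A 'weak prior' forest: many shallow, randomized trees."""
    rng = random.Random(seed)
    return [train_stump(samples, labels, rng) for _ in range(n_trees)]

def foreground_prior(forest, patch):
    """Average the per-tree figure fractions into a prior in [0, 1]."""
    x = patch_features(patch)
    return sum((pl if x[f] <= thr else pr)
               for f, thr, pl, pr in forest) / len(forest)

# Toy "initial template": bright 2x2 patches are figure, dark are ground.
figure = [[[200, 210], [205, 215]], [[190, 200], [195, 205]]]
ground = [[[20, 30], [25, 35]], [[10, 15], [12, 18]]]
samples = [patch_features(p) for p in figure + ground]
labels = [1, 1, 0, 0]
forest = train_forest(samples, labels)

hi_prior = foreground_prior(forest, [[198, 202], [200, 204]])  # bright patch
lo_prior = foreground_prior(forest, [[10, 14], [12, 16]])      # dark patch
print(hi_prior, lo_prior)
```

In the paper's pipeline the forest is additionally updated frame by frame, and the resulting prior probabilities are fused with low-level region information to obtain the final figure-ground segmentation; both steps are omitted from this sketch.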