Towards Interactive Generation of "Ground-truth" in Background Subtraction from Partially Labeled Examples

  • Authors:
  • E. Grossmann;A. Kale;C. Jaynes

  • Affiliations:
  • Department of Computer Science and Center for Visualization and Virtual Environments, University of Kentucky, Lexington KY 40507. etienne@cs.uky.edu;Dept. of Electr. & Comput. Eng., Carnegie Mellon Univ., Pittsburgh, PA, USA;Corp. Res. Adv. Eng. Multimedia, Robert Bosch GmbH, Stuttgart, Germany

  • Venue:
  • ICCCN '05 Proceedings of the 14th International Conference on Computer Communications and Networks
  • Year:
  • 2005

Abstract

Ground truth segmentation of foreground and background is important for evaluating the performance of existing techniques and can guide the principled development of video analysis algorithms. Unfortunately, generating ground truth data is cumbersome and incurs a high cost in human labor. In this paper, we propose an interactive method for producing foreground/background segmentations of video sequences captured by a stationary camera that requires comparatively little human labor while still producing high-quality results. Given a sequence, the user indicates, with a few clicks in a GUI, rectangular regions that contain only foreground or only background pixels. AdaBoost then builds a classifier that combines the outputs of a set of weak classifiers. The resulting classifier is run on the remainder of the sequence. Based on the results and the accuracy requirements, the user can then select additional example regions for training. This cycle of hand-labeling, training, and automatic classification leads to a high-quality segmentation with little effort. Our experiments show promising results, raise new issues, and provide some insight into possible improvements.