Neural Networks Retraining for Unsupervised Video Object Segmentation of Videoconference Sequences

  • Authors:
  • Klimis S. Ntalianis, Nikolaos D. Doulamis, Anastasios D. Doulamis, Stefanos D. Kollias

  • Venue:
  • ICANN '02 Proceedings of the International Conference on Artificial Neural Networks
  • Year:
  • 2002


Abstract

This paper presents an efficient scheme for generalizing the performance of neural network classifiers, applied to unsupervised video object segmentation in videoconference/videophone sequences. Whenever conditions change, a retraining phase is activated and the classifier is adapted to the new environment. Both former and current knowledge are exploited during retraining, so that good network generalization is achieved. The retraining algorithm reduces to the minimization of a convex function subject to linear constraints, which permits very fast network weight adaptation. Current knowledge is extracted without supervision by a face-body detector based on Gaussian p.d.f. models; a binary template matching technique is also incorporated, imposing shape constraints on candidate face regions. Finally, the retrained network performs video object segmentation in the new environment. Experiments on real sequences indicate the promising performance of the proposed adaptive neural network as an efficient video object segmentation tool.
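The abstract states that retraining reduces to minimizing a convex function subject to linear constraints. A minimal sketch of one such formulation, assuming (as is common in this line of work, but not stated explicitly above) that the objective is the squared deviation of the new weights from the former weights and that the current training data imposes equality constraints A @ w = b, is shown below; the names `retrain_weights`, `A`, and `b` are illustrative, not taken from the paper:

```python
import numpy as np

def retrain_weights(w_former, A, b):
    """Minimally perturb the former network weights so that the retrained
    weights satisfy linear constraints derived from the current data.

    Solves: minimize 0.5 * ||w - w_former||^2  subject to  A @ w = b.
    The closed-form solution (via Lagrange multipliers) is
        w = w_former - A.T @ inv(A @ A.T) @ (A @ w_former - b),
    assuming the rows of A are linearly independent.
    """
    residual = A @ w_former - b          # how far the old weights miss the constraints
    correction = A.T @ np.linalg.solve(A @ A.T, residual)
    return w_former - correction
```

Because the objective is quadratic and the constraints are linear, this problem has a unique closed-form solution, which is consistent with the paper's claim of very fast weight adaptation.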