On-line novel view synthesis capable of handling multiple moving objects

  • Authors:
  • Indra Geys; Luc Van Gool

  • Affiliations:
  • ESAT/PSI-VISICS, Katholieke Universiteit Leuven, Leuven, Belgium (both authors)

  • Venue:
  • ICCV'05: Proceedings of the 2005 International Conference on Computer Vision in Human-Computer Interaction
  • Year:
  • 2005


Abstract

This paper presents a new interactive teleconferencing system. It adds a ‘virtual’ camera to the scene that can move freely between multiple real cameras. The viewpoint can be selected automatically using basic cinematographic rules, based on the position and the actions of the instructor. This produces a clearer and more engaging view for the remote audience, without the need for a human editor. Creating the novel views generated by such a ‘virtual’ camera requires segmentation and depth calculations. The system is semi-automatic, in that the user is asked to indicate a few corresponding points or edges to generate an initial rough background model. In addition to the static background and a moving foreground, multiple independently moving objects are also catered for. The initial foreground contour is tracked over time using a new active contour. If a second object appears, the contour prediction makes it possible to recognize this situation and to take appropriate measures. The 3D models are continuously validated using a Birchfield dissimilarity measure. The foreground model is updated every frame; the background is refined when necessary. The current implementation reaches approximately 4 fps on a single desktop.