Synchronization of Images from Multiple Cameras to Reconstruct a Moving Human

  • Authors:
  • Carl Moore; Toby Duckworth; Rob Aspin; David Roberts

  • Venue:
  • DS-RT '10: Proceedings of the 2010 IEEE/ACM 14th International Symposium on Distributed Simulation and Real Time Applications

  • Year:
  • 2010

Abstract

What level of synchronization is necessary between images from multiple cameras in order to realistically reconstruct a moving human in 3D? Live reconstruction of the human form, from cameras surrounding the subject, could bridge the gap between video conferencing and Immersive Collaborative Virtual Environments (ICVEs). Video conferencing faithfully reproduces what someone looks like, whereas an ICVE faithfully reproduces what they look at. While 3D video has been demonstrated in tele-immersion prototypes, its visual and temporal quality has been well below what has become acceptable in video conferencing. Managed synchronization of the acquisition stage is universally used today to ensure that the multiple images feeding the reconstruction algorithm were taken at the same time. However, this inevitably increases latency and jitter. We measure the temporal characteristics of the capture stage and the impact of temporal inconsistency on the reconstruction algorithm it feeds. This gives us both input and output characteristics for synchronization. From these we determine whether frame synchronization of multiple camera video streams actually needs to be delivered for 3D reconstruction, and if not, what level of temporal divergence is acceptable across the captured image frames.
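
To make the notion of "temporal divergence across captured frames" concrete, the sketch below (not the authors' code; the `Frame` layout, the 33 ms tolerance, and the timestamps are illustrative assumptions) assembles one frame per camera nearest a reference time and accepts the set for reconstruction only when the spread of capture times falls under a threshold:

```python
"""Illustrative sketch: gating a multi-camera frame set by its
temporal divergence before feeding a 3D reconstruction stage.
All names, values, and timestamps here are assumptions, not the
paper's measured characteristics."""

from dataclasses import dataclass

@dataclass
class Frame:
    camera_id: int
    timestamp_ms: float  # capture time on a common clock
    # image payload omitted for brevity

def nearest_frame(frames, t_ref):
    """Pick the frame whose capture time is closest to t_ref."""
    return min(frames, key=lambda f: abs(f.timestamp_ms - t_ref))

def assemble_frame_set(streams, t_ref, max_divergence_ms=33.0):
    """Select one frame per camera near t_ref; accept the set only
    if the spread of capture times stays within max_divergence_ms."""
    selected = [nearest_frame(frames, t_ref) for frames in streams.values()]
    times = [f.timestamp_ms for f in selected]
    divergence = max(times) - min(times)
    if divergence <= max_divergence_ms:
        return selected, divergence  # pass to reconstruction
    return None, divergence         # too spread out: skip or wait

# Example: three unsynchronized ~30 fps cameras with offset clocks.
streams = {
    0: [Frame(0, t) for t in (0.0, 33.3, 66.7)],
    1: [Frame(1, t) for t in (5.1, 38.4, 71.8)],
    2: [Frame(2, t) for t in (12.9, 46.2, 79.5)],
}
frames, spread = assemble_frame_set(streams, t_ref=35.0)
print(f"divergence = {spread:.1f} ms, accepted = {frames is not None}")
```

Under this framing, the paper's question becomes an empirical one: how large `max_divergence_ms` can be before the reconstruction of a moving subject visibly degrades, versus the latency and jitter cost of enforcing a smaller bound at acquisition.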