Model-guided luminance range enhancement in mixed reality

  • Authors:
  • Yunjun Zhang, Charles E. Hughes

  • Affiliations:
  • Computer Science Department, University of Central Florida (both authors)

  • Venue:
  • ICIAR'07 Proceedings of the 4th International Conference on Image Analysis and Recognition
  • Year:
  • 2007


Abstract

Mixed Reality (MR) applications tend to focus on the accuracy of registration between the virtual and real objects of a scene, while paying relatively little attention to the representation of the luminance range in the merged video output. In this paper, we propose a means to partially address this deficiency by introducing Enhanced Dynamic Range Video, a technique based on differing brightness settings for each eye of a video see-through head mounted display (HMD). First, we construct a Video-Driven Time-Stamped Ball Cloud (VDTSBC), which serves as a guideline and a means to store temporal color information for stereo image registration. Second, with the assistance of the VDTSBC, we register each pair of stereo images, taking into account confounding issues of occlusion occurring within one eye but not the other. Finally, we apply luminance enhancement on the registered image pairs to generate an Enhanced Dynamic Range Video.
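The final step above fuses a registered stereo pair captured at two different brightness settings into a single enhanced-dynamic-range frame. The paper does not specify the fusion formula here, so the following is a minimal illustrative sketch using a simple well-exposedness weighting (a Gaussian around mid-gray) to blend the dark and bright exposures per pixel; the function name `fuse_exposures` and the weighting scheme are assumptions, not the authors' exact method.

```python
import numpy as np

def fuse_exposures(dark, bright, sigma=0.2):
    """Blend two registered, differently exposed images (float arrays
    in [0, 1]) into one enhanced-dynamic-range frame.

    Uses a well-exposedness weight that favors pixels near mid-gray;
    an illustrative stand-in, not the paper's exact enhancement step.
    """
    def weight(img):
        # Gaussian centered at 0.5: near-saturated or near-black
        # pixels receive low weight.
        return np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))

    w_dark, w_bright = weight(dark), weight(bright)
    total = w_dark + w_bright + 1e-8  # avoid division by zero
    return (w_dark * dark + w_bright * bright) / total

# Example: the bright exposure saturates highlights while the dark
# exposure preserves them; the fused result stays within [0, 1].
ramp = np.linspace(0.0, 1.0, 5)
dark = np.clip(ramp * 0.5, 0.0, 1.0)    # underexposed view
bright = np.clip(ramp * 2.0, 0.0, 1.0)  # overexposed view
fused = fuse_exposures(dark, bright)
```

Because the weights are convex per pixel, each fused value lies between the corresponding dark and bright values, so the output never exceeds the valid range regardless of exposure mismatch.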