Compositing for small cameras

  • Authors:
  • Georg Klein; David Murray

  • Affiliations:
  • Active Vision Laboratory, Department of Engineering Science, University of Oxford, UK (both authors)

  • Venue:
  • ISMAR '08 Proceedings of the 7th IEEE/ACM International Symposium on Mixed and Augmented Reality
  • Year:
  • 2008

Abstract

To achieve a realistic integration of virtual and real imagery in video see-through augmented reality, the rendered images should have an appearance and quality similar to those captured by the video camera. This paper describes a compositing method that models the artefacts produced by a small low-cost camera and adds these effects to an ideal pinhole image produced by conventional rendering methods. We attempt to model and simulate each step of the imaging process, including distortions, chromatic aberrations, blur, Bayer masking, noise and colour-space compression, while requiring only an RGBA image and an estimate of camera velocity as inputs.
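
The sketch below is not the authors' implementation; it is a minimal illustration of the kind of artefact pipeline the abstract describes, assuming NumPy and SciPy are available. All function names, parameter values (distortion coefficient, per-channel scales, blur taps, noise sigma) and the crude demosaicing and chroma-blurring stand-ins are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumptions, not the paper's method): apply distortion with
# per-channel scaling (chromatic aberration), velocity-based blur, Bayer
# sampling plus a crude demosaic, sensor noise, and chroma compression.
import numpy as np
from scipy.ndimage import map_coordinates, gaussian_filter

def radial_distort(img, k1=0.1, channel_scales=(1.002, 1.0, 0.998)):
    """Simple radial distortion; per-channel scale mimics lateral chromatic aberration."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    x, y = (xx - cx) / w, (yy - cy) / h            # normalised image coordinates
    r2 = x * x + y * y
    out = np.empty_like(img)
    for c, s in enumerate(channel_scales):
        scale = s * (1.0 + k1 * r2)
        xs, ys = cx + x * scale * w, cy + y * scale * h
        out[..., c] = map_coordinates(img[..., c], [ys, xs], order=1, mode='nearest')
    return out

def motion_blur(img, velocity_px=(4.0, 0.0), taps=8):
    """Approximate blur from camera velocity by averaging shifted copies of the frame."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    acc = np.zeros_like(img)
    for t in np.linspace(-0.5, 0.5, taps):
        xs, ys = xx + t * velocity_px[0], yy + t * velocity_px[1]
        for c in range(img.shape[2]):
            acc[..., c] += map_coordinates(img[..., c], [ys, xs], order=1, mode='nearest')
    return acc / taps

def bayer_and_demosaic(img):
    """Sample through an RGGB Bayer mask, then crudely demosaic by normalised blurring
    (a stand-in for the camera's own interpolation)."""
    h, w = img.shape[:2]
    mosaic = np.zeros((h, w, 3))
    weights = np.zeros((h, w, 3))
    mosaic[0::2, 0::2, 0] = img[0::2, 0::2, 0]     # R
    mosaic[0::2, 1::2, 1] = img[0::2, 1::2, 1]     # G
    mosaic[1::2, 0::2, 1] = img[1::2, 0::2, 1]     # G
    mosaic[1::2, 1::2, 2] = img[1::2, 1::2, 2]     # B
    weights[0::2, 0::2, 0] = weights[1::2, 1::2, 2] = 1.0
    weights[0::2, 1::2, 1] = weights[1::2, 0::2, 1] = 1.0
    num = gaussian_filter(mosaic, sigma=(1.0, 1.0, 0.0))
    den = gaussian_filter(weights, sigma=(1.0, 1.0, 0.0)) + 1e-8
    return num / den

def noise_and_chroma_compress(img, sigma=0.01):
    """Add Gaussian sensor noise, then blur unscaled colour-difference channels
    to mimic chroma subsampling / colour-space compression."""
    noisy = np.clip(img + np.random.normal(0.0, sigma, img.shape), 0.0, 1.0)
    y = noisy @ np.array([0.299, 0.587, 0.114])
    cb, cr = noisy[..., 2] - y, noisy[..., 0] - y   # B - Y, R - Y
    cb, cr = gaussian_filter(cb, 1.5), gaussian_filter(cr, 1.5)
    g = y - 0.194 * cb - 0.509 * cr                 # recover G from Y and differences
    return np.clip(np.stack([y + cr, g, y + cb], axis=-1), 0.0, 1.0)

def simulate_camera(rgba, velocity_px=(4.0, 0.0)):
    """Run an ideal rendered RGBA frame through the simulated artefact chain."""
    rgb = rgba[..., :3].astype(np.float64)
    rgb = radial_distort(rgb)
    rgb = motion_blur(rgb, velocity_px)
    rgb = bayer_and_demosaic(rgb)
    return noise_and_chroma_compress(rgb)
```

As a usage example, `simulate_camera(rendered_rgba, velocity_px=(4.0, 0.0))` would degrade a clean rendered frame so that its look is closer to that of the live video before compositing; in the paper itself these effects are applied to the rendered layer only, with the camera velocity estimate driving the blur.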