Modelling and accuracy estimation of a new omnidirectional depth computation sensor

  • Authors:
  • Radu Orghidan; Joaquim Salvi; El Mustapha Mouaddib

  • Affiliations:
  • Institute of Informatics and Applications, Computer Vision and Robotics Group, University of Girona, Edifici P-IV, Campus Montilivi, 17071 Girona, Spain (R. Orghidan, J. Salvi); Centre of Robotics, Electrotechnics and Automation, University of Picardie Jules Verne, Amiens, France (E.M. Mouaddib)

  • Venue:
  • Pattern Recognition Letters
  • Year:
  • 2006


Abstract

Depth computation is an attractive feature in computer vision. Achieving panoramic perception with traditional perspective cameras requires several images, which typically implies using several cameras or a sensor with moving parts; moreover, misalignments can appear in non-static scenes. Omnidirectional cameras offer a much wider field of view (FOV) than perspective cameras, capture a full panoramic view in a single image, and alleviate problems due to occlusions. A practical way to obtain depth in computer vision is the use of structured light systems. This paper focuses on combining omnidirectional vision and structured light with the aim of obtaining panoramic depth information. The resulting sensor is formed by a single catadioptric camera and an omnidirectional light projector. The model and the prototype of a new omnidirectional depth computation sensor are presented in this article, and its accuracy is estimated by means of laboratory experimental setups.
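The depth principle behind structured light sensors like the one described is triangulation: each image point back-projects to a ray, and intersecting that ray with the calibrated surface of projected light yields a 3D point. The sketch below illustrates this with a plain ray-plane intersection; it is a minimal illustration only, assuming a pinhole-style back-projected ray and a planar light sheet, whereas the paper's sensor uses catadioptric (mirror-based) geometry for both camera and projector.

```python
import numpy as np

def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Intersect the ray origin + t*direction (t >= 0) with a plane.

    The plane is given by a point on it and its normal. Returns the
    3D intersection point. Illustrative helper, not the paper's model.
    """
    direction = np.asarray(direction, dtype=float)
    direction = direction / np.linalg.norm(direction)
    denom = float(np.dot(plane_normal, direction))
    if abs(denom) < 1e-12:
        raise ValueError("ray is parallel to the light plane")
    t = float(np.dot(plane_normal, np.asarray(plane_point) - np.asarray(origin))) / denom
    return np.asarray(origin, dtype=float) + t * direction

# Example: camera centre at the origin, a viewing ray slightly tilted
# off the optical axis, and a light sheet at z = 2 (all units arbitrary).
point = intersect_ray_plane(origin=[0.0, 0.0, 0.0],
                            direction=[0.1, 0.0, 1.0],
                            plane_point=[0.0, 0.0, 2.0],
                            plane_normal=[0.0, 0.0, 1.0])
# point -> [0.2, 0.0, 2.0]
```

In the catadioptric case, the straight back-projection above would be replaced by the ray reflected off the camera's mirror, and the plane by the calibrated light surface emitted by the omnidirectional projector, but the intersection step carries over unchanged.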