Multi-sensor 3D volumetric reconstruction using CUDA

  • Authors:
  • Hadi Aliakbarpour; Luis Almeida; Paulo Menezes; Jorge Dias

  • Affiliations:
  • Institute of Systems and Robotics, Polo II, University of Coimbra, Coimbra, Portugal; Polytechnic Institute of Tomar, Tomar, Portugal; Institute of Systems and Robotics, Polo II, University of Coimbra, Coimbra, Portugal; Institute of Systems and Robotics, Polo II, University of Coimbra, Coimbra, Portugal

  • Venue:
  • 3D Research
  • Year:
  • 2011

Abstract

This paper presents full-body volumetric reconstruction of a person in a scene using a sensor network in which some of the sensors can be mobile. The network is composed of camera and inertial sensor (IS) pairs. By taking advantage of the IS, the 3D reconstruction is performed without assuming a planar ground. Moreover, the IS in each pair is used to define a virtual camera whose image plane is horizontal and aligned with the Earth's cardinal directions. The IS is further used to define a set of inertial planes in the scene. The image plane of each virtual camera is projected onto this set of parallel, horizontal inertial planes using adapted homography functions. A parallel processing architecture is proposed to perform real-time volumetric reconstruction of the human body. Real-time performance is achieved by implementing the reconstruction algorithm on a graphics processing unit (GPU) using the Compute Unified Device Architecture (CUDA). To show the effectiveness of the proposed algorithm, a variety of gestures performed by a person acting in the scene are reconstructed and demonstrated. Analyses are carried out to measure the performance of the algorithm in terms of processing time. The proposed framework has potential applications such as smart rooms, human behavior analysis, and 3D teleconferencing.
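
The abstract does not detail the fusion kernel, but a minimal sketch of one plausible CUDA realization is given below, assuming a silhouette-intersection (shape-from-silhouette) scheme: one thread per cell of a horizontal inertial plane, each cell warped into every camera's image by a 3x3 homography and kept as occupied only if all silhouettes agree. The kernel name fusePlaneKernel, the constant-memory array d_H, and the MAX_CAMERAS bound are hypothetical illustrations, not taken from the paper.

#include <cuda_runtime.h>

#define MAX_CAMERAS 8   /* assumed upper bound on the number of camera/IS pairs */

/* Homographies (row-major 3x3) mapping inertial-plane cells into each camera's
   virtual image; the host uploads one set per inertial plane. */
__constant__ float d_H[MAX_CAMERAS][9];

__global__ void fusePlaneKernel(const unsigned char* const* silhouettes, /* one binary mask per camera */
                                int numCams, int imgW, int imgH,
                                unsigned char* planeOcc,                 /* occupancy result for this plane */
                                int planeW, int planeH)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= planeW || y >= planeH) return;

    unsigned char occupied = 1;
    for (int c = 0; c < numCams && occupied; ++c) {
        const float* H = d_H[c];
        /* Warp the plane cell (x, y, 1) into camera c's image with the homography. */
        float u = H[0] * x + H[1] * y + H[2];
        float v = H[3] * x + H[4] * y + H[5];
        float w = H[6] * x + H[7] * y + H[8];
        if (fabsf(w) < 1e-6f) { occupied = 0; break; }
        int px = (int)(u / w);
        int py = (int)(v / w);
        /* The cell stays occupied only if it projects inside the silhouette in every view. */
        if (px < 0 || px >= imgW || py < 0 || py >= imgH ||
            silhouettes[c][py * imgW + px] == 0)
            occupied = 0;
    }
    planeOcc[y * planeW + x] = occupied;
}

/* Host-side launch sketch (memory setup and error checking omitted):
   dim3 block(16, 16);
   dim3 grid((planeW + block.x - 1) / block.x, (planeH + block.y - 1) / block.y);
   fusePlaneKernel<<<grid, block>>>(d_silhouettePtrs, numCams, imgW, imgH,
                                    d_planeOcc, planeW, planeH);             */

Running such a kernel once per inertial plane and stacking the resulting occupancy grids would yield the volumetric reconstruction; in practice the per-plane results would stay in GPU memory to avoid host-device transfers between planes.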