In this paper we present a real-time implementation of a Bayesian framework for robotic multisensory perception on a graphics processing unit (GPU) using the Compute Unified Device Architecture (CUDA). As an additional objective, we aim to show the benefits of parallel computing for similar problems (i.e. probabilistic grid-based frameworks) and the user-friendly nature of CUDA as a programming tool. Inspired by the study of biological systems, several Bayesian inference algorithms for artificial perception have been proposed, but their high computational cost has been a prohibitive factor for real-time implementations. In some cases, however, the bottleneck lies in the large data structures involved rather than in the Bayesian inference per se. We demonstrate that the SIMD (single-instruction, multiple-data) features of GPUs make it possible to take a complex framework of relatively simple, highly parallelisable algorithms operating on large data structures, which might take up to several minutes to execute in a regular CPU implementation, and arrive at an implementation that runs in the order of tenths of a second. The implemented multimodal perception module (comprising stereovision, binaural sensing and inertial sensing) builds an egocentric representation of occupancy and local motion, the Bayesian Volumetric Map (BVM), based on which gaze shift decisions are made to perform active exploration and reduce the entropy of the BVM. Experimental results show that the real-time implementation successfully drives the robotic system to explore areas of the environment mapped with high uncertainty.
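To illustrate why grid-based Bayesian frameworks of this kind parallelise so well, the sketch below shows a per-cell Bayesian occupancy update and the map entropy that drives exploration. This is an illustrative serial sketch in Python/NumPy under assumed conditions, not the authors' actual BVM implementation: the likelihood values and grid size are hypothetical, and the point is that each cell's posterior depends only on that cell's prior and measurement, so one GPU thread can be assigned per cell.

```python
import numpy as np

def update_occupancy(prior, lik_occ, lik_emp):
    """Per-cell Bayes update for a binary occupancy variable:
    P(occ | z) = P(z | occ) P(occ) / P(z).
    Every cell is independent of its neighbours, which is what makes
    the grid update embarrassingly parallel (one SIMD lane per cell)."""
    post = lik_occ * prior
    norm = post + lik_emp * (1.0 - prior)
    return post / norm

def grid_entropy(p):
    """Shannon entropy (in bits) of the occupancy map, summed over cells.
    Cells near p = 0.5 contribute most; an active-exploration policy
    shifts gaze toward regions that most reduce this quantity."""
    eps = 1e-12  # guard against log(0)
    h = -(p * np.log2(p + eps) + (1.0 - p) * np.log2(1.0 - p + eps))
    return float(h.sum())

# Hypothetical example: a small grid with a uniform 0.5 prior observed
# by a sensor with assumed likelihoods P(z|occ)=0.9, P(z|empty)=0.2.
prior = np.full((4, 4), 0.5)
post = update_occupancy(prior, lik_occ=0.9, lik_emp=0.2)
```

After the update every cell's occupancy probability moves toward certainty, so the map entropy drops; a full implementation would launch this per-cell arithmetic as a CUDA kernel over the whole volumetric grid.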