Successful path planning and object manipulation in service robotics rely both on a good estimate of the robot's position and orientation (pose) in the environment and on a reliable understanding of the visualized scene. In this paper, a robust real-time system for camera pose and scene structure estimation is proposed. First, the pose of the camera is estimated by analyzing so-called tracks, which consist of key features from the imaged scene together with geometric constraints used to solve the pose estimation problem. Second, based on the calculated pose of the camera, i.e. of the robot, the scene is analyzed via a robust depth segmentation and object classification approach. To segment object depth reliably, a feedback control technique is applied at the image processing level, with the purpose of improving the robustness of the robotic vision system against external influences such as cluttered scenes and variable illumination conditions. The control strategy detailed in this paper builds on the traditional open-loop mathematical model of the depth estimation process. In order to control a robotic system, the obtained visual information is classified into objects of interest and obstacles. The proposed scene analysis architecture is evaluated through experimental results within a robotic collision avoidance system.
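The abstract's idea of closing a feedback loop at the image processing level can be illustrated with a minimal sketch. The example below is purely hypothetical and is not the authors' controller: it uses a proportional controller to adjust a depth-segmentation threshold so that the fraction of pixels classified as foreground tracks a reference value, which is one simple way such a loop could compensate for scene and illumination changes. All function names, gains, and the setpoint are illustrative assumptions.

```python
# Hypothetical sketch: closed-loop tuning of a depth-segmentation threshold.
# A proportional controller drives the measured foreground-pixel fraction
# toward a reference value, instead of using a fixed open-loop threshold.

def segment(depth, threshold):
    """Classify pixels closer than `threshold` as foreground (1), else 0."""
    return [[1 if d < threshold else 0 for d in row] for row in depth]

def foreground_fraction(mask):
    """Fraction of pixels labeled as foreground in a binary mask."""
    total = sum(len(row) for row in mask)
    return sum(sum(row) for row in mask) / total

def closed_loop_segment(depth, reference=0.3, gain=0.5, iters=20):
    """Iteratively adjust the threshold so the segmented area fraction
    approaches `reference` (simple proportional feedback)."""
    threshold = 1.0  # initial guess; depth units are arbitrary here
    for _ in range(iters):
        mask = segment(depth, threshold)
        error = reference - foreground_fraction(mask)
        threshold += gain * error  # proportional correction step
    return threshold, segment(depth, threshold)

if __name__ == "__main__":
    # Synthetic 10x10 "depth image" with values 0.0 .. 0.9 per row.
    depth = [[i / 10 for i in range(10)] for _ in range(10)]
    threshold, mask = closed_loop_segment(depth)
    print(threshold, foreground_fraction(mask))
```

In a real pipeline the controlled quantity would be a measured property of the segmented depth map rather than a raw pixel fraction, but the loop structure (measure, compare to reference, correct the processing parameter) is the same.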