Image-based Visual Servoing with Central Catadioptric Cameras
International Journal of Robotics Research
Switching visual control based on epipoles for mobile robots
Robotics and Autonomous Systems
Dynamic visual tracking control of a mobile robot with image noise and occlusion robustness
Image and Vision Computing
Parking with the essential matrix without short baseline degeneracies
ICRA'09: Proceedings of the 2009 IEEE International Conference on Robotics and Automation
Visual homing for undulatory robotic locomotion
ICRA'09: Proceedings of the 2009 IEEE International Conference on Robotics and Automation
Visual control through the trifocal tensor for nonholonomic robots
Robotics and Autonomous Systems
Homography-based control scheme for mobile robots with nonholonomic and field-of-view constraints
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics - Special issue on gait analysis
Vision-based exponential stabilization of mobile robots
Autonomous Robots
A monocular camera-based vision system attached to a mobile robot (i.e., the camera-in-hand configuration) is considered in this paper. By comparing corresponding target points of an object in two camera images, geometric relationships are exploited to derive a transformation that relates the current position and orientation of the mobile robot to a fixed reference position and orientation. This transformation is used to synthesize rotation and translation error systems between the current pose and the reference pose. Lyapunov-based techniques are used to construct an adaptive estimate that compensates for a constant, unmeasurable depth parameter, and to prove asymptotic regulation of the mobile robot. The contribution of this paper is an adaptive controller, crafted via Lyapunov techniques, that regulates the position and orientation of the mobile robot despite the absence of both an object model and depth information. Experimental results are provided to illustrate the performance of the controller.
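To illustrate the Lyapunov-based adaptive depth compensation described above, the following is a minimal sketch, not the paper's actual controller: a scalar translation error e with kinematics e' = v/d, where d > 0 is the unknown constant depth. The gains, the quadratic Lyapunov function, and the gradient-style update law are illustrative assumptions; the paper's full design covers the coupled rotation/translation error system.

```python
# Hypothetical scalar analogue of adaptive depth compensation.
# Plant:   e_dot = v / d,  with d > 0 unknown and constant.
# Design:  V = (d/2) e^2 + (1/(2*gamma)) (d - d_hat)^2
#   control:       v = -k * d_hat * e
#   adaptation:    d_hat_dot = gamma * k * e^2
# yields V_dot = -k * d * e^2 <= 0, so e -> 0 without knowing d.

def simulate(d=2.0, e0=1.0, d_hat0=0.1, k=1.0, gamma=5.0, dt=1e-3, T=20.0):
    """Euler-integrate the closed loop; returns final (e, d_hat)."""
    e, d_hat = e0, d_hat0
    for _ in range(int(T / dt)):
        v = -k * d_hat * e           # controller uses the estimate d_hat
        e += (v / d) * dt            # true plant uses the unknown depth d
        d_hat += gamma * k * e * e * dt  # estimate grows while error persists
    return e, d_hat

e_final, d_hat_final = simulate()
```

Note that d_hat need not converge to the true depth d; the Lyapunov argument only guarantees that the estimate stays bounded while the regulation error is driven to zero, which mirrors the asymptotic-regulation claim in the abstract.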