A novel approach to visual servoing is presented which exploits the structure of the Lie algebra of affine transformations. The aim is to use feedback from a visual sensor to guide a robot arm to a target position. The target position is learned using the principle of ‘teaching by showing’, in which the supervisor places the robot in the correct target position and the system captures the information needed to return to that position. The sensor is mounted on the end effector of the robot (the ‘camera-in-hand’ configuration) and thus provides direct feedback of the robot's motion relative to the target scene via observed transformations of the scene. These scene transformations are obtained by measuring the affine deformations of a target planar contour (under the weak-perspective assumption), tracked with an active contour, or snake. Deformations of the snake are constrained using the Lie groups of affine and projective transformations. Properties of the Lie algebra of affine transformations are exploited to provide a novel method for integrating observed deformations of the target contour, which are then compensated by appropriate robot motion under a non-linear control structure. The local differential representation of contour deformations is extended to allow accurate integration of an extended series of small perturbations. This differs from existing approaches by virtue of the Lie algebra representation, which implicitly embeds knowledge of the three-dimensional world within a two-dimensional image-based system. These techniques have been implemented using a video camera to control a 5-DoF robot arm. Experiments with this implementation are presented, together with a discussion of the results.