A simple technique for estimating the camera displacement from point correspondences in eye-in-hand visual servoing is presented. The key idea, which yields more accurate results than existing methods, is to exploit the fact that the point correspondences observed during the camera motion arise from stationary 3D points, thereby incorporating additional information. The method first estimates the object's Euclidean structure and then estimates the camera displacement from that structure estimate.
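The second step of the technique — recovering the camera displacement once a Euclidean structure estimate is available — can be sketched as a rigid 3D-3D alignment. The following is a minimal illustration, not the paper's actual algorithm: it assumes the same stationary points have already been reconstructed in both the reference and current camera frames, and it uses the standard SVD-based (Kabsch) least-squares solution. The function name and synthetic data are illustrative.

```python
import numpy as np

def camera_displacement(X_ref, X_cur):
    """Estimate the rigid displacement (R, t) such that X_cur ~ R @ X_ref + t.

    X_ref, X_cur: (N, 3) arrays holding the same stationary 3D points
    expressed in the reference and current camera frames.
    Uses the SVD-based (Kabsch) least-squares alignment.
    """
    mu_r = X_ref.mean(axis=0)
    mu_c = X_cur.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (X_ref - mu_r).T @ (X_cur - mu_c)
    U, _, Vt = np.linalg.svd(H)
    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T
    t = mu_c - R @ mu_r
    return R, t

# Synthetic check: random structure, known displacement.
rng = np.random.default_rng(0)
X_ref = rng.standard_normal((20, 3))
a = 0.3
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, -0.2, 1.0])
X_cur = X_ref @ R_true.T + t_true

R, t = camera_displacement(X_ref, X_cur)
print(np.allclose(R, R_true), np.allclose(t, t_true))  # True True
```

In practice the structure estimate would come from triangulating the tracked point correspondences, and the alignment would be computed over noisy data, where the least-squares formulation above is exactly what makes the SVD solution appropriate.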