Automatic grasp planning for visual-servo controlled robotic manipulators
IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics
We present an approach for controlling robotic interactions with objects using synthetic images generated by morphing shapes. In particular, we address the problem of positioning an eye-in-hand robotic system with respect to objects in the workspace for grasping and manipulation. In our formulation, the grasp position (and consequently the approach trajectory of the manipulator) varies with each object. The proposed solution consists of two parts. First, following a model-based object recognition framework, images of the objects taken at the desired grasp pose are stored in a database. An unknown input object (drawn from the family of recognizable objects) is recognized, and its grasp position identified, by morphing its contour to the templates in the database and using the virtual energy spent during the morph as a dissimilarity measure. Second, the images synthesized during the morph are used to guide the eye-in-hand system toward the grasp pose and execute the grasp. The proposed method requires minimal calibration of the system, and it conjoins techniques from shape recognition, computer graphics, and vision-based robot control in a unified engineering framework. Potential applications range from recognition and positioning with respect to partially occluded or deformable objects to planning robotic grasping based on human demonstration.
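The two-part pipeline above can be sketched in miniature. In this hedged illustration, the paper's physically based morph energy is replaced by a simple squared-displacement proxy, contours are toy polygons assumed to be pre-resampled to the same length with index-wise point correspondence, and all names (`morph_energy`, `recognize`, `morph_sequence`) are hypothetical helpers, not the authors' implementation:

```python
# Toy contours: closed polygons as lists of (x, y) points, assumed to be
# already resampled to equal length with index-wise correspondence.
# (The paper establishes correspondence via the morph itself; here it is
# simply assumed for illustration.)

def morph_energy(a, b):
    """Dissimilarity proxy: total squared displacement needed to morph
    contour a onto contour b. The paper uses the virtual energy of a
    physically based shape morph; squared displacement is a stand-in."""
    return sum((ax - bx) ** 2 + (ay - by) ** 2
               for (ax, ay), (bx, by) in zip(a, b))

def recognize(query, templates):
    """Step 1: pick the database template that costs the least virtual
    energy to morph the query contour onto."""
    return min(templates, key=lambda name: morph_energy(query, templates[name]))

def morph_sequence(src, dst, steps):
    """Step 2: synthesize the intermediate contours of the morph, which
    would serve as reference images guiding the eye-in-hand servo loop."""
    seq = []
    for k in range(steps + 1):
        t = k / steps
        seq.append([((1 - t) * sx + t * dx, (1 - t) * sy + t * dy)
                    for (sx, sy), (dx, dy) in zip(src, dst)])
    return seq

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
diamond = [(0.5, -0.1), (1.1, 0.5), (0.5, 1.1), (-0.1, 0.5)]
templates = {"square": [(0, 0), (1, 0), (1, 1), (0, 1)],
             "diamond": diamond}

print(recognize(square, templates))          # zero-energy match: "square"
waypoints = morph_sequence(square, diamond, 4)
print(len(waypoints))                        # 5 intermediate reference shapes
```

In the actual system the interpolated shapes are rendered as synthetic images and compared against the live camera view, so that the manipulator tracks the morph toward the stored grasp pose without full camera calibration.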