We apply techniques of computer vision and neural network learning to obtain a versatile robot manipulator. All work follows the principle of autonomous learning from visual demonstration: the user demonstrates the relevant objects, situations, and/or actions, and the robot vision system learns from them. Approaching and grasping technical objects involves three principal tasks: calibrating the camera-robot coordination, detecting the desired object in the images, and choosing a stable grasping pose. These procedures rest on (nonlinear) functions that are not known a priori and therefore must be learned. We uniformly approximate the necessary functions by networks of Gaussian basis functions (GBF networks). Modifying the number of basis functions and/or the size of the Gaussian support changes the quality of the function approximation. The appropriate configuration is learned in the training phase and applied during the operation phase. All experiments are carried out in real-world applications using an industrial articulated robot manipulator and the computer vision system KHOROS.
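The GBF approximation scheme described above can be sketched as follows. This is a minimal one-dimensional illustration, not the authors' implementation: the centers, the Gaussian width, the target function, and all parameter values are assumptions chosen for the example, and the output weights are fitted by ordinary least squares. Varying `n_basis` and `width` reproduces the trade-off the abstract mentions between the number of basis functions, the size of the Gaussian support, and approximation quality.

```python
import numpy as np

def gbf_design_matrix(x, centers, width):
    # Gaussian basis activations: phi_j(x) = exp(-(x - c_j)^2 / (2 * width^2))
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

def fit_gbf(x, y, n_basis, width):
    # Spread the centers evenly over the input range (an assumption of this
    # sketch) and solve for the linear output weights by least squares.
    centers = np.linspace(x.min(), x.max(), n_basis)
    phi = gbf_design_matrix(x, centers, width)
    weights, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return centers, weights

def predict_gbf(x, centers, width, weights):
    return gbf_design_matrix(x, centers, width) @ weights

# Stand-in for an unknown nonlinear mapping that must be learned from samples.
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x)

centers, weights = fit_gbf(x, y, n_basis=10, width=0.08)
err = np.max(np.abs(predict_gbf(x, centers, 0.08, weights) - y))
```

In the robot setting the input would be image-derived features and the output a pose or coordinate correction rather than a scalar, but the structure (fixed Gaussian units plus a learned linear output layer) is the same.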