Learning visually guided grasping: a test case in sensorimotor learning
IEEE Transactions on Systems, Man, and Cybernetics, Part A: Systems and Humans
We describe a general learning approach to fine-positioning of a robot gripper in a three-dimensional workspace using visual sensor data. The approach has two steps: (a) a hybrid representation for encoding the robot state perceived by the visual sensors, and (b) a partitioning of the robot's action space that lets multiple specialized controllers evolve.

The input encoding represents position by geometric features and describes orientation uniquely by a combination of principal components. Such dimension reduction is essential for applying both supervised and reinforcement learning. A fuzzy controller based on B-spline models serves as a function approximator, taking this encoded input and producing the motion parameters as outputs.

A complex positioning and pose-control task is divided into consecutive subtasks, each solved by a specialized self-learning controller. The approach has been successfully applied to control 6-axis robots that translate the gripper in the three-dimensional workspace and rotate it about the z-axis. Instead of undergoing a cumbersome hand-eye calibration process, our system lets the controllers evolve through systematic perturbation motions around the desired position and orientation.
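As a rough illustration of the B-spline function approximator mentioned in the abstract, the sketch below shows a minimal self-learning controller of this kind. All details are assumptions for illustration, not the paper's actual implementation: a single scalar input, degree-1 B-spline (triangular fuzzy set) basis functions over a fixed knot grid, and LMS-style gradient updates of the learnable control points.

```python
import numpy as np

class BsplineController:
    """Minimal sketch of a B-spline fuzzy controller (illustrative only).

    The output is a convex combination of learnable control points c_i,
    weighted by degree-1 B-spline (triangular) membership functions that
    form a partition of unity over the knot grid."""

    def __init__(self, centers):
        self.centers = np.asarray(centers, dtype=float)  # sorted knot centers
        self.c = np.zeros(len(self.centers))             # learnable control points

    def basis(self, x):
        """Triangular membership values of x; at most two are nonzero."""
        x = float(np.clip(x, self.centers[0], self.centers[-1]))
        phi = np.zeros(len(self.centers))
        j = int(np.searchsorted(self.centers, x, side="right")) - 1
        j = min(j, len(self.centers) - 2)                # clamp right edge
        h = self.centers[j + 1] - self.centers[j]
        phi[j] = (self.centers[j + 1] - x) / h
        phi[j + 1] = (x - self.centers[j]) / h
        return phi                                       # sums to 1

    def __call__(self, x):
        return float(self.basis(x) @ self.c)

    def train_step(self, x, target, lr=0.5):
        """Supervised LMS update: move control points toward the target."""
        phi = self.basis(x)
        err = self(x) - target
        self.c -= lr * err * phi
        return err

# Toy usage: learn a smooth mapping (here sin) from perturbation samples,
# loosely analogous to training on systematic perturbation motions.
ctl = BsplineController(np.linspace(0.0, np.pi, 9))
rng = np.random.default_rng(0)
for _ in range(3000):
    x = rng.uniform(0.0, np.pi)
    ctl.train_step(x, np.sin(x))
```

Because the basis functions have local support, each sample updates only the two control points whose fuzzy sets cover it, which is what makes this class of controller cheap to train online.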