Conventional robot control schemes are basically model-based methods. However, exact modeling of robot dynamics poses considerable problems, and task execution faces various uncertainties. This paper proposes a reinforcement learning control approach to overcome these drawbacks. An artificial neural network (ANN) serves as the learning structure, and a stochastic real-valued (SRV) unit as the learning method. First, force tracking control of a two-link robot arm is simulated to verify the control design. The simulation results confirm that, even without information about the robot dynamic model or the environment, operation rules for simultaneously controlling force and velocity can be acquired through repetitive exploration. Achieving acceptable performance, however, demanded many learning iterations, and the learning speed proved too slow for practical applications. The approach herein therefore improves tracking performance by combining a conventional controller with the reinforcement learning strategy. Experimental results demonstrate improved trajectory tracking of a two-link direct-drive robot manipulator using the proposed method.
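To make the learning method concrete, the following is a minimal sketch of an SRV-style unit in the spirit of the abstract: the action mean is a linear function of the input features, exploration noise shrinks as the unit's reinforcement estimate improves, and the weights move toward actions that earned more reinforcement than expected. The class name, learning rates, and the toy one-dimensional task are all illustrative assumptions, not the paper's actual controller.

```python
import numpy as np

class SRVUnit:
    """Hypothetical minimal stochastic real-valued (SRV) unit.

    act():   emit a real-valued action mu + sigma * noise.
    learn(): reinforce noise directions that beat the predicted reward.
    """

    def __init__(self, n_inputs, lr=0.1, lr_baseline=0.1, sigma_max=1.0, seed=0):
        self.rng = np.random.default_rng(seed)
        self.w = np.zeros(n_inputs)   # action-mean weights
        self.v = np.zeros(n_inputs)   # reinforcement-baseline weights
        self.lr = lr
        self.lr_baseline = lr_baseline
        self.sigma_max = sigma_max

    def act(self, x):
        mu = self.w @ x
        r_hat = float(np.clip(self.v @ x, 0.0, 1.0))  # predicted reward in [0, 1]
        sigma = self.sigma_max * (1.0 - r_hat)        # explore less when confident
        noise = self.rng.normal(0.0, 1.0)
        y = mu + sigma * noise
        return y, (x, noise)

    def learn(self, trace, r):
        x, noise = trace
        r_hat = float(np.clip(self.v @ x, 0.0, 1.0))
        # Push the action mean along the noise direction if the received
        # reinforcement r exceeded the baseline prediction r_hat.
        self.w += self.lr * (r - r_hat) * noise * x
        # Track the expected reinforcement for this input.
        self.v += self.lr_baseline * (r - self.v @ x) * x

# Toy usage: learn to emit the (assumed) target action 0.5 for a fixed input.
unit = SRVUnit(n_inputs=1)
x = np.array([1.0])
target = 0.5
for _ in range(2000):
    y, trace = unit.act(x)
    r = max(0.0, 1.0 - abs(y - target))  # reinforcement signal in [0, 1]
    unit.learn(trace, r)
```

In the hybrid scheme the abstract describes, the learned action would not act alone: the commanded torque would be the sum of a conventional (e.g. PD) control term and the SRV unit's compensation, so the learner only has to correct the model error rather than discover the whole control law by exploration.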