Compared with their robotic counterparts, humans excel at a wide range of tasks thanks to their ability to adaptively modulate arm impedance. This ability allows us to perform contact tasks successfully even in uncertain environments. This paper proposes a motor-skill learning strategy for robotic contact tasks based on a human motor control theory and machine learning schemes. The method combines impedance control, based on the equilibrium-point control hypothesis, with reinforcement learning to determine the impedance parameters for contact tasks: a recursive least-squares filter-based episodic natural actor-critic algorithm searches for the optimal impedance parameters. The effectiveness of the proposed method was evaluated through dynamic simulations of various contact tasks. The simulation results demonstrate that the method optimizes contact-task performance under uncertain environmental conditions.
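The core loop the abstract describes — a Gaussian policy over impedance parameters, refined by an episodic natural actor-critic — can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a batch least-squares solve in place of the recursive (RLS) formulation, and the 1-DOF contact model, force-tracking cost, and all numerical values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rollout(theta, sigma=0.05):
    """Run one episode with impedance parameters sampled from a Gaussian policy.

    theta holds the mean log-stiffness and log-damping; exploration noise eps
    perturbs them. Returns the episodic return and the policy score vector.
    """
    eps = rng.normal(0.0, sigma, size=2)
    k, b = np.exp(theta + eps)                   # sampled stiffness, damping
    m, dt, f_d = 1.0, 0.01, 5.0                  # mass, step size, desired force
    x, v, cost = 0.05, 0.0, 0.0                  # start just above the surface
    for _ in range(200):
        f_env = -1000.0 * min(x, 0.0)            # stiff wall below x = 0
        f_ctrl = -k * x - b * v                  # impedance law, equilibrium at x = 0
        v += (f_ctrl + f_env) / m * dt           # explicit Euler integration
        x += v * dt
        cost += (f_env - f_d) ** 2 * dt + 1e-4 * (k + b)
    score = eps / sigma**2                       # grad log pi for the Gaussian mean
    return -cost, score

def enac_update(theta, n_episodes=20, alpha=0.02):
    """One episodic natural actor-critic step: regress returns on [score; 1]."""
    Phi, R = [], []
    for _ in range(n_episodes):
        ret, score = rollout(theta)
        Phi.append(np.append(score, 1.0))        # eNAC basis: score plus baseline
        R.append(ret)
    sol, *_ = np.linalg.lstsq(np.array(Phi), np.array(R), rcond=None)
    w = sol[:-1]                                 # natural-gradient direction
    return theta + alpha * w / (np.linalg.norm(w) + 1e-8)

theta = np.log([100.0, 20.0])                    # initial stiffness/damping guess
for _ in range(5):
    theta = enac_update(theta)
```

The least-squares solve recovers the natural gradient because the score vectors form a Fisher-compatible basis; the paper's RLS filter performs the same regression incrementally, one episode at a time, rather than in a batch.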