This chapter presents a robotic mechanism designed to navigate unconventional environments such as rigid aerial lines (power, telephone, railroad) and reticulated structures (ladders, grills, bars, etc.). A novel obstacle-avoidance method for this mechanism is also introduced. Computing collision-free trajectories generally requires an analytical description of the physical structure of the environment and the solution of the mechanism's kinematic equations. In dynamic, uncertain environments with unknown obstacles, however, real-time collision avoidance is very hard to achieve with analytical techniques. The main strength of the proposed method is precisely that it departs from the analytical approach: it uses no formal description of the location or shape of the obstacles, nor does it solve the kinematic equations of the mechanism. Instead, the method follows the perception-reason-action paradigm and is based on a reinforcement learning process guided by perceptual feedback, which can be considered biologically inspired at the functional level. From this perspective, obstacle avoidance is modeled as a multi-objective optimization problem. As the chapter shows, the method can be applied straightforwardly to real-time collision avoidance for articulated mechanisms, including conventional manipulator arms.
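The core idea above (learning collision-free motion from perceptual feedback rather than from an analytical model of the environment) can be illustrated with a deliberately toy sketch. The grid world, obstacle positions, reward values, and tabular Q-learning agent below are all assumptions for illustration only; they are not the chapter's actual mechanism or algorithm. The competing objectives (reach the goal vs. avoid obstacles) are scalarized into a single reward, one common way to handle a multi-objective formulation.

```python
import random

# Hypothetical illustration: a point agent on a small grid learns
# collision-free moves toward a goal via tabular Q-learning. The only
# "perception" is the reward signal; no map geometry or kinematics is
# used by the learner itself.

GRID = 5
OBSTACLES = {(1, 1), (2, 2), (3, 1)}           # assumed obstacle cells
GOAL = (4, 4)
ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # four axis-aligned steps

def step(state, action):
    """Apply an action; the returned reward scalarizes two objectives:
    progress toward the goal (small step cost, goal bonus) and obstacle
    avoidance (large collision penalty, agent stays put)."""
    nx = min(max(state[0] + action[0], 0), GRID - 1)
    ny = min(max(state[1] + action[1], 0), GRID - 1)
    nxt = (nx, ny)
    if nxt in OBSTACLES:
        return state, -10.0, False             # collision: penalized, blocked
    if nxt == GOAL:
        return nxt, 10.0, True                 # goal reached
    return nxt, -0.1, False                    # step cost encourages progress

def train(episodes=2000, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Epsilon-greedy tabular Q-learning from a fixed start state."""
    rng = random.Random(seed)
    Q = {}
    for _ in range(episodes):
        s = (0, 0)
        for _ in range(50):
            if rng.random() < eps:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
            s2, r, done = step(s, ACTIONS[a])
            best_next = max(Q.get((s2, i), 0.0) for i in range(len(ACTIONS)))
            old = Q.get((s, a), 0.0)
            Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = s2
            if done:
                break
    return Q

def greedy_path(Q, max_steps=50):
    """Roll out the learned greedy policy; by construction the path
    never enters an obstacle cell."""
    s = (0, 0)
    path = [s]
    for _ in range(max_steps):
        a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
        s, _, done = step(s, ACTIONS[a])
        path.append(s)
        if done:
            break
    return path
```

After training, `greedy_path(train())` yields an obstacle-free route from the start cell to the goal. The chapter's method operates on an articulated mechanism in a continuous, uncertain environment, so this discrete sketch only conveys the functional shape of the learning loop, not its actual state or action representation.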