Interference in neural networks occurs when learning in one region of the input space causes unlearning in another. Networks that are less susceptible to interference are referred to as spatially local networks. To better understand these properties, we develop a theoretical framework consisting of a measure of interference and a measure of network localization; both measures incorporate not only the network weights and architecture but also the learning algorithm. Using this framework to analyze sigmoidal multilayer perceptron (MLP) networks trained by backpropagation on the quadratic cost function, we address the familiar misconception that single-hidden-layer sigmoidal networks are inherently nonlocal: we demonstrate that, given a sufficiently large number of adjustable weights, there exist single-hidden-layer sigmoidal MLPs that are arbitrarily local yet retain the ability to approximate any continuous function on a compact domain.
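The interference phenomenon described above can be reproduced in a few lines. The sketch below is illustrative and not taken from the paper: it trains a small single-hidden-layer sigmoidal MLP by batch backpropagation on the quadratic cost, first on one region of the input domain, then exclusively on a second region, and reports how the error on the first region degrades. The target function, network size, and learning-rate/epoch settings are all arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SigmoidMLP:
    """1-input, single-hidden-layer sigmoidal MLP with a linear output."""

    def __init__(self, hidden=10):
        self.W1 = rng.normal(0.0, 1.0, (1, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 1.0, (hidden, 1))
        self.b2 = np.zeros(1)

    def forward(self, x):
        self.h = sigmoid(x @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def train(self, x, y, lr=0.5, epochs=5000):
        """Batch gradient descent on the quadratic cost (backpropagation)."""
        n = len(x)
        for _ in range(epochs):
            err = self.forward(x) - y            # d(cost)/d(output)
            dW2 = self.h.T @ err / n
            db2 = err.mean(axis=0)
            dh = (err @ self.W2.T) * self.h * (1.0 - self.h)
            dW1 = x.T @ dh / n
            db1 = dh.mean(axis=0)
            self.W2 -= lr * dW2; self.b2 -= lr * db2
            self.W1 -= lr * dW1; self.b1 -= lr * db1

    def mse(self, x, y):
        return float(np.mean((self.forward(x) - y) ** 2))

# Two disjoint regions of the input space, same underlying target.
target = lambda x: np.sin(2.0 * np.pi * x)
x_a = np.linspace(0.0, 0.5, 25).reshape(-1, 1)   # region A
x_b = np.linspace(0.5, 1.0, 25).reshape(-1, 1)   # region B

net = SigmoidMLP()
net.train(x_a, target(x_a))                       # phase 1: learn region A
before = net.mse(x_a, target(x_a))
net.train(x_b, target(x_b))                       # phase 2: learn only region B
after = net.mse(x_a, target(x_a))                 # region-A performance revisited

print(f"region-A MSE after phase 1: {before:.4f}")
print(f"region-A MSE after training only on B: {after:.4f}")
```

Because every sigmoidal hidden unit responds over the whole input line, gradient steps driven only by region-B samples move weights that region A depends on, so the region-A error grows during phase 2. A spatially local network, in the paper's sense, is one for which this cross-region damage is small.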