Learning from examples plays a central role in artificial neural networks. The success of many learning schemes is not guaranteed, however, since algorithms like backpropagation may get stuck in local minima, thus providing suboptimal solutions. For feedforward networks, optimal learning can be achieved provided that certain conditions on the network and the learning environment are met. This principle is investigated here for networks using radial basis functions (RBF). Assuming that the patterns of the learning environment are separable by hyperspheres, we prove that the attached cost function is free of local minima with respect to all the weights. This provides a theoretical foundation for the broad application of RBF networks in pattern recognition.
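The hypersphere-separable setting can be illustrated with a toy sketch. The example below is not the paper's construction: the data, centers, width, and learning rate are all illustrative assumptions. It trains only the output weights of a small Gaussian-RBF network by gradient descent on a 2-D problem whose labels depend solely on distance from the origin (a hypersphere rule). With the centers and widths held fixed, the squared-error cost is quadratic, hence convex, in the output weights, so gradient descent cannot be trapped in a suboptimal local minimum — a restricted instance of the kind of local-minima-free behavior the abstract discusses.

```python
import math
import random

random.seed(0)

# Toy hypersphere-separable data: label 1 inside a circle of radius 0.6.
X = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(200)]
y = [1.0 if math.hypot(a, b) < 0.6 else 0.0 for (a, b) in X]

# Illustrative RBF layer: two fixed Gaussian units plus a bias feature.
centers = [(0.0, 0.0), (0.9, 0.9)]
width = 0.5

def features(p):
    # [1, exp(-||p - c||^2 / (2 * width^2)) for each center c]
    feats = [1.0]
    for c in centers:
        d2 = (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2
        feats.append(math.exp(-d2 / (2 * width * width)))
    return feats

Phi = [features(p) for p in X]
w = [0.0] * len(Phi[0])  # only the output weights are learned here

def loss():
    # Mean squared error of the linear readout over the RBF features.
    return sum((sum(f * wi for f, wi in zip(feats, w)) - t) ** 2
               for feats, t in zip(Phi, y)) / len(y)

initial_loss = loss()
lr = 0.2
for _ in range(2000):
    grad = [0.0] * len(w)
    for feats, t in zip(Phi, y):
        err = sum(f * wi for f, wi in zip(feats, w)) - t
        for j, f in enumerate(feats):
            grad[j] += 2 * err * f / len(y)
    w = [wi - lr * g for wi, g in zip(w, grad)]

final_loss = loss()
```

Because the cost is convex in these weights, plain gradient descent with a small enough step size drives the error monotonically toward the global minimum; the harder question the paper addresses is what happens when all the weights, including centers, are adapted.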