Networks can be considered as approximation schemes. Multilayer networks of the backpropagation type can approximate arbitrarily well continuous functions (Cybenko, 1989; Funahashi, 1989; Stinchcombe and White, 1989). We prove that networks derived from regularization theory, including Radial Basis Functions (Poggio and Girosi, 1989), have a similar property. From the point of view of approximation theory, however, the property of approximating continuous functions arbitrarily well is not sufficient to characterize good approximation schemes. More critical is the property of best approximation. The main result of this paper is that multilayer networks of the type used in backpropagation do not have the best approximation property. For regularization networks (in particular Radial Basis Function networks) we prove existence and uniqueness of the best approximation.
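As a concrete illustration of the kind of approximation scheme discussed above, the following sketch fits a Gaussian Radial Basis Function expansion to samples of a continuous function by solving a linear least-squares problem for the coefficients. This is a minimal illustrative example, not the paper's construction; the choice of centers, width `sigma`, and target function are assumptions made for the demonstration.

```python
import numpy as np

# Illustrative Gaussian RBF approximation (not the paper's method):
#   f(x) = sum_i c_i * exp(-(x - t_i)^2 / (2 * sigma^2))
# The coefficients c_i are obtained by linear least squares on samples
# of a continuous target function.

def rbf_design(x, centers, sigma):
    """Design matrix G[j, i] = Gaussian kernel between sample x[j] and center t_i."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * sigma**2))

def rbf_fit(x, y, centers, sigma=0.3):
    """Solve min_c ||G c - y||_2 for the RBF coefficients."""
    G = rbf_design(x, centers, sigma)
    coeffs, *_ = np.linalg.lstsq(G, y, rcond=None)
    return coeffs

def rbf_eval(x, centers, coeffs, sigma=0.3):
    """Evaluate the fitted RBF expansion at points x."""
    return rbf_design(x, centers, sigma) @ coeffs

# Approximate sin on [0, 2*pi] from 50 samples with 15 centers.
x = np.linspace(0.0, 2.0 * np.pi, 50)
y = np.sin(x)
centers = np.linspace(0.0, 2.0 * np.pi, 15)
c = rbf_fit(x, y, centers)
err = np.max(np.abs(rbf_eval(x, centers, c) - y))
print(f"max abs error on samples: {err:.2e}")
```

The linear dependence of the expansion on its coefficients is what makes existence and uniqueness arguments of the kind proved in the paper tractable, in contrast to the nonlinear parametrization of multilayer backpropagation networks.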