The gradient of a neural network's output function, evaluated at the training points, plays an essential role in its generalization capability. This paper presents a feedforward neural architecture (αNet) that learns the activation functions of its hidden units during the training phase. This automatic learning is achieved through the joint use of the Hermite regression formula and the CGD (conjugate gradient descent) optimization algorithm with the Powell restart conditions. The technique yields an output function of αNet that is smooth in the neighborhood of the training points, improving both the generalization capability and the flexibility of the neural architecture. Experimental results comparing αNet with traditional architectures using sigmoidal or sinusoidal activation functions show that αNet is highly flexible and has good approximation and classification capabilities.
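To make the core ingredient concrete, the following is a minimal sketch of a hidden-unit activation expressed as a linear combination of orthonormal Hermite functions, whose coefficients are free parameters trained alongside the network weights. The names (`hermite_functions`, `HermiteActivation`), the expansion order, and the initialization scale are illustrative assumptions, not the paper's code; the joint training loop with CGD and the Powell restart conditions is not reproduced here.

```python
import numpy as np

def hermite_functions(x, order):
    """Evaluate the orthonormal Hermite functions psi_0 .. psi_{order-1} at x.

    Uses the numerically stable three-term recurrence
        psi_k(x) = x*sqrt(2/k)*psi_{k-1}(x) - sqrt((k-1)/k)*psi_{k-2}(x),
    with psi_0(x) = pi^(-1/4) * exp(-x^2/2) and psi_1(x) = sqrt(2)*x*psi_0(x).
    Returns an array of shape (order,) + x.shape.
    """
    x = np.asarray(x, dtype=float)
    psi = np.empty((order,) + x.shape)
    psi[0] = np.pi ** -0.25 * np.exp(-0.5 * x ** 2)
    if order > 1:
        psi[1] = np.sqrt(2.0) * x * psi[0]
    for k in range(2, order):
        psi[k] = x * np.sqrt(2.0 / k) * psi[k - 1] - np.sqrt((k - 1.0) / k) * psi[k - 2]
    return psi

class HermiteActivation:
    """Adaptive activation phi(x) = sum_k c_k * psi_k(x).

    The coefficients c_k are learnable parameters; in the paper's scheme they
    would be optimized jointly with the network weights (hypothetical
    initialization shown here).
    """

    def __init__(self, order=6, seed=0):
        rng = np.random.default_rng(seed)
        self.coef = rng.normal(scale=0.1, size=order)  # learnable coefficients c_k

    def __call__(self, x):
        # Weighted sum over the Hermite basis evaluated at the unit's net input.
        return self.coef @ hermite_functions(x, self.coef.size)

if __name__ == "__main__":
    act = HermiteActivation(order=6)
    x = np.linspace(-3.0, 3.0, 7)
    print(act(x))  # activation values, one per input point
```

Because each psi_k decays like exp(-x^2/2), the resulting activation is smooth and bounded, which is consistent with the abstract's emphasis on controlling the output-function gradient near the training points.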