Using the Hermite Regression Formula to Design a Neural Architecture with Automatic Learning of the "Hidden" Activation Functions

  • Authors:
  • Salvatore Gaglio, G. Pilato, Filippo Sorbello, G. Vassallo


  • Venue:
  • AI*IA '99: Proceedings of the 6th Congress of the Italian Association for Artificial Intelligence (Advances in Artificial Intelligence)
  • Year:
  • 1999

Abstract

The gradient of a neural network's output function, evaluated at the training points, plays an essential role in its generalization capability. This paper presents a feed-forward neural architecture (αNet) that learns the activation functions of its hidden units during the training phase. This automatic learning is obtained through the joint use of the Hermite regression formula and the conjugate gradient descent (CGD) optimization algorithm with Powell restart conditions. The technique yields an output function of αNet that is smooth in the neighborhood of the training points, improving both the generalization capability and the flexibility of the architecture. Experimental results, obtained by comparing αNet with traditional architectures using sigmoidal or sinusoidal activation functions, show that αNet is very flexible and has good approximation and classification capabilities.
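The paper itself provides no code here; as a rough sketch of the general idea, the NumPy example below (all names and coefficient values are hypothetical, not taken from the paper) models a hidden unit's activation as a linear combination of orthonormal Hermite functions whose coefficients are free parameters, updated by the optimizer together with the ordinary connection weights.

```python
import numpy as np

def hermite_functions(x, order):
    """Evaluate the first `order` orthonormal Hermite functions at the
    points in `x`, using the stable three-term recurrence
        h_0(x)     = pi**(-1/4) * exp(-x**2 / 2)
        h_1(x)     = sqrt(2) * x * h_0(x)
        h_{k+1}(x) = sqrt(2/(k+1)) * x * h_k(x) - sqrt(k/(k+1)) * h_{k-1}(x)
    Returns an array of shape (order,) + x.shape."""
    x = np.asarray(x, dtype=float)
    h = np.empty((order,) + x.shape)
    h[0] = np.pi ** -0.25 * np.exp(-x ** 2 / 2)
    if order > 1:
        h[1] = np.sqrt(2.0) * x * h[0]
    for k in range(1, order - 1):
        h[k + 1] = (np.sqrt(2.0 / (k + 1)) * x * h[k]
                    - np.sqrt(k / (k + 1)) * h[k - 1])
    return h

def learned_activation(x, coeffs):
    """A hidden unit's activation: a linear combination of Hermite
    functions. The coefficients `coeffs` are trainable parameters."""
    return np.tensordot(coeffs, hermite_functions(x, len(coeffs)), axes=1)

# Hypothetical coefficients, standing in for values found during training.
coeffs = np.array([1.0, 0.3, -0.2, 0.05])
print(learned_activation(np.linspace(-3.0, 3.0, 7), coeffs))
```

Because each Hermite function is smooth and decays like a Gaussian, any finite combination of them is itself smooth, which is consistent with the smoothness of the output function near the training points that the abstract emphasizes.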