Supervised and unsupervised co-training of adaptive activation functions in neural nets

  • Authors:
  • Ilaria Castelli; Edmondo Trentin

  • Affiliations:
  • Dipartimento di Ingegneria dell'Informazione, Università di Siena, Siena, Italy (both authors)

  • Venue:
  • PSL'11: Proceedings of the First IAPR TC3 Conference on Partially Supervised Learning
  • Year:
  • 2011

Abstract

In spite of the nice theoretical properties of mixtures of logistic activation functions, a standard feedforward neural network with limited resources and gradient-descent optimization of the connection weights may fail in practice on several difficult learning tasks. Such tasks would be better faced by relying on a more appropriate, problem-specific basis of activation functions. This paper introduces a connectionist model that features adaptive activation functions. Each hidden unit in the network is associated with a specific pair (f(·), p(·)), where f(·) (the activation function itself) is modeled via a specialized neural network, and p(·) is a probabilistic measure of the likelihood that the unit is relevant to the computation of the output over the current input. While f(·) is optimized in a supervised manner (through a novel backpropagation scheme of the target outputs, which does not suffer from the traditional "vanishing gradient" phenomenon that occurs in standard backpropagation), p(·) is realized via a statistical parametric model learned through unsupervised estimation. The overall machine is implicitly a co-trained coupled model, in which the topology chosen for learning each f(·) may vary on a unit-by-unit basis, resulting in a highly non-standard neural architecture.
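
To make the pair (f(·), p(·)) concrete, the following is a minimal NumPy sketch of one such hidden unit. The paper does not publish code; the tiny tanh sub-network standing in for f(·), the isotropic Gaussian standing in for the parametric p(·), the way p(·) gates the unit's response, and all names and shapes below are illustrative assumptions, not the authors' exact formulation.

    import numpy as np

    class AdaptiveActivationUnit:
        """One hidden unit pairing an adaptive activation f(.) with a
        relevance measure p(.). Hypothetical sketch: the tanh sub-network
        for f(.), the Gaussian form of p(.), and the gating in output()
        are assumptions, not the paper's exact formulation."""

        def __init__(self, in_dim, f_hidden=8, seed=0):
            rng = np.random.default_rng(seed)
            # Ordinary connection weights feeding the pre-activation a = w.x + b.
            self.w = rng.normal(scale=0.1, size=in_dim)
            self.b = 0.0
            # f(.): a one-hidden-layer sub-network mapping a scalar
            # pre-activation to a scalar activation (trained with supervision).
            self.V = rng.normal(scale=0.1, size=f_hidden)
            self.c = np.zeros(f_hidden)
            self.u = rng.normal(scale=0.1, size=f_hidden)
            # p(.): a parametric density over inputs (here an isotropic
            # Gaussian, fitted without labels) scoring the unit's relevance.
            self.mu = np.zeros(in_dim)
            self.sigma2 = 1.0

        def f(self, a):
            # Adaptive activation: tiny MLP evaluated at the pre-activation a.
            return np.tanh(a * self.V + self.c) @ self.u

        def p(self, x):
            # Gaussian relevance of the unit for input x (unnormalized).
            return np.exp(-0.5 * np.sum((x - self.mu) ** 2) / self.sigma2)

        def output(self, x):
            # Unit response: relevance-weighted adaptive activation.
            return self.p(x) * self.f(self.w @ x + self.b)

        def fit_p(self, X):
            # Unsupervised maximum-likelihood update of the Gaussian p(.).
            self.mu = X.mean(axis=0)
            self.sigma2 = max(float(np.mean((X - self.mu) ** 2)), 1e-8)

    # Example: fit p(.) on unlabeled inputs, then evaluate the unit.
    X = np.random.default_rng(1).normal(size=(100, 4))
    unit = AdaptiveActivationUnit(in_dim=4)
    unit.fit_p(X)
    y = unit.output(X[0])

The supervised part of the co-training (the backpropagation of target outputs that adapts each f(·), with per-unit sub-network topologies) is omitted here, since the abstract gives no detail of that scheme beyond its vanishing-gradient property.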