Neural networks trained with the EEM algorithm: tuning the smoothing parameter

  • Authors:
  • Jorge M. Santos; Joaquim Marques de Sá; Luís A. Alexandre

  • Affiliations:
  • Instituto de Engenharia Biomédica, Porto, Portugal and Instituto Superior de Engenharia do Porto, Porto, Portugal; Instituto de Engenharia Biomédica, Porto, Portugal; IT, Networks and Multimedia Group, Covilhã, Portugal

  • Venue:
  • NN'05: Proceedings of the 6th WSEAS International Conference on Neural Networks
  • Year:
  • 2005

Abstract

The training of neural networks, and in particular of Multi-Layer Perceptrons (MLPs), is performed by minimizing an error function usually known as a "cost function". In our previous work, we applied the Error Entropy Minimization (EEM) algorithm, and an optimized version of it, to classification, using as cost function the entropy of the errors between the outputs of the neural network and the desired targets. One of the difficulties in implementing the EEM algorithm is the choice of the smoothing parameter, also known as the window size, of the Parzen window probability density function estimate used to compute the entropy and its gradient. We present here a formula that yields the value of the smoothing parameter as a function of the number of data samples and of the neural network output dimension. Several experiments with real data sets were performed to show the validity of the proposed formula.
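
The abstract does not reproduce the proposed formula itself, so the sketch below is only illustrative of the mechanism it tunes: it estimates Rényi's quadratic entropy of the network errors with a Gaussian Parzen window, the entropy estimator commonly used in EEM-style training, and plugs in a Silverman-style bandwidth rule as a hypothetical stand-in for the paper's smoothing-parameter formula. The function names and the specific bandwidth rule are assumptions for this sketch, not the authors' method.

```python
import numpy as np

def renyi_quadratic_entropy(errors, h):
    """Parzen-window estimate of Renyi's quadratic entropy of the errors.

    errors : (N, d) array of errors (network outputs minus targets).
    h      : smoothing parameter (Gaussian kernel bandwidth).
    """
    n, d = errors.shape
    # Pairwise differences between all error samples: shape (N, N, d).
    diff = errors[:, None, :] - errors[None, :, :]
    sq_dist = np.sum(diff ** 2, axis=-1)                  # (N, N)
    # The convolution of two Gaussian kernels of width h is a Gaussian
    # of width h * sqrt(2), hence the effective variance 2 * h**2.
    sigma2 = 2.0 * h ** 2
    kernel = np.exp(-sq_dist / (2.0 * sigma2)) / ((2.0 * np.pi * sigma2) ** (d / 2))
    # Information potential V; the quadratic entropy is H2 = -log(V).
    v = kernel.mean()
    return -np.log(v)

def smoothing_parameter(n_samples, out_dim):
    """Hypothetical bandwidth rule depending on the sample count and the
    network output dimension. This is a Silverman-style heuristic used
    here as a placeholder for the paper's formula, which the abstract
    does not state; it assumes roughly unit-variance errors (otherwise
    scale the result by the errors' standard deviation)."""
    return (4.0 / ((out_dim + 2) * n_samples)) ** (1.0 / (out_dim + 4))

# Toy usage: 200 two-dimensional error samples.
errors = 0.3 * np.random.randn(200, 2)
h = smoothing_parameter(n_samples=200, out_dim=2)
print(f"h = {h:.4f}, H2 = {renyi_quadratic_entropy(errors, h):.4f}")
```

In EEM training the gradient of this entropy with respect to the network weights replaces the gradient of the mean squared error, so the choice of h directly shapes the cost surface: too small a bandwidth makes the estimate noisy, too large a bandwidth over-smooths it, which is why a principled rule tied to the sample count and output dimension matters.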