Multilayer perceptrons as classifiers guided by mutual information and trained with genetic algorithms

  • Authors:
  • Antonio Neme; Sergio Hernández; Antonio Nido; Carlos Islas

  • Affiliations:
  • Universidad Autónoma de la Ciudad de México, México, D.F., México (all authors)

  • Venue:
  • IDEAL'12 Proceedings of the 13th international conference on Intelligent Data Engineering and Automated Learning
  • Year:
  • 2012


Abstract

Multilayer perceptrons can be trained with several algorithms, using different quantities that measure the agreement between the expected output and the network's actual output. The most common of these quantities is the mean squared error, but information-theoretic quantities have also been applied with great success. A common alternative to the standard backpropagation algorithm is to train multilayer perceptrons with evolutionary computation. In this contribution we evaluate the performance of multilayer perceptrons as classifiers when trained with genetic algorithms, using as the guiding quantity the mutual information between the label produced by the network and the expected class. We propose a classification algorithm in which each input variable is replaced by a function of it, chosen so that the mutual information between that function and the label is maximized. These approximated functions are then fed as input to a multilayer perceptron that learns the classification map, trained with genetic algorithms and guided by mutual information.
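The second stage of the scheme described above can be sketched as a toy genetic algorithm whose fitness function is the empirical mutual information between the labels produced by the network and the true classes. This is only an illustrative sketch: the function names, the one-hidden-layer architecture, and the GA operators (truncation selection, Gaussian mutation) are assumptions, not the paper's actual implementation, and the feature-substitution stage is omitted.

```python
import numpy as np

def mutual_information(pred, true):
    """Empirical mutual information (in bits) between two label arrays."""
    pred, true = np.asarray(pred), np.asarray(true)
    mi = 0.0
    for p in np.unique(pred):
        for t in np.unique(true):
            p_joint = np.mean((pred == p) & (true == t))
            if p_joint > 0:
                mi += p_joint * np.log2(
                    p_joint / (np.mean(pred == p) * np.mean(true == t)))
    return mi

def mlp_labels(w, X, n_hidden=4, n_classes=2):
    """One-hidden-layer MLP; weights unpacked from a flat chromosome w."""
    n_in = X.shape[1]
    i = 0
    W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden * n_classes].reshape(n_hidden, n_classes)
    i += n_hidden * n_classes
    b2 = w[i:i + n_classes]
    h = np.tanh(X @ W1 + b1)          # hidden activations
    return np.argmax(h @ W2 + b2, axis=1)  # hard class labels

def evolve(X, y, pop_size=40, generations=50, n_hidden=4, seed=0):
    """Toy GA: fitness is MI between the network's labels and the true classes."""
    rng = np.random.default_rng(seed)
    n_classes = len(np.unique(y))
    n_w = X.shape[1] * n_hidden + n_hidden + n_hidden * n_classes + n_classes
    pop = rng.normal(size=(pop_size, n_w))
    for _ in range(generations):
        fit = np.array([mutual_information(mlp_labels(w, X, n_hidden, n_classes), y)
                        for w in pop])
        parents = pop[np.argsort(fit)[::-1][:pop_size // 2]]  # truncation selection
        kids = parents + rng.normal(scale=0.2, size=parents.shape)  # mutation
        pop = np.vstack([parents, kids])
    fit = np.array([mutual_information(mlp_labels(w, X, n_hidden, n_classes), y)
                    for w in pop])
    return pop[np.argmax(fit)]
```

Note that mutual information, unlike mean squared error, is invariant to a permutation of the output labels: it rewards any consistent mapping between the network's labels and the true classes, which is why it can serve as a fitness measure for hard classification outputs.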