On the Generalization of the m-Class RDP Neural Network

  • Authors:
  • David A. Elizondo; Juan M. Ortiz-De-Lazcano-Lobato; Ralph Birkenhead

  • Affiliations:
  • School of Computing, De Montfort University, Leicester LE1 9BH, United Kingdom; School of Computing, University of Málaga, Málaga, Spain; School of Computing, De Montfort University, Leicester LE1 9BH, United Kingdom

  • Venue:
  • ICANN '08 Proceedings of the 18th international conference on Artificial Neural Networks, Part II
  • Year:
  • 2008

Abstract

The Recursive Deterministic Perceptron (RDP) feed-forward multilayer neural network is a generalization of the single-layer perceptron topology. This model can solve any two-class classification problem, whereas the single-layer perceptron can only solve classification problems involving linearly separable sets. For all classification problems, the construction of an RDP is performed automatically and convergence is always guaranteed. A generalization of the two-class RDP exists which makes it possible to separate, in a deterministic way, m classes. It is based on a new notion of linear separability and follows naturally from the two-class RDP. The methods for building two-class RDP neural networks have been extensively tested; however, the m-class RDP method has not been tested before. This paper presents the first study of the performance of the m-class method. The study highlights the main advantages and disadvantages of this method by comparing the results obtained when building m-class RDP neural networks with those of more classical methods such as Backpropagation and Cascade Correlation. The networks were trained and tested using the following standard benchmark classification datasets: IRIS, SOYBEAN, and Wisconsin Breast Cancer.
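The notion of linear separability underpinning the RDP can be illustrated with the classic single-layer perceptron learning rule: training converges if and only if the two classes are linearly separable (given a sufficient epoch budget). The sketch below is an illustration of that limitation, not the authors' RDP construction; the function name and epoch budget are chosen here for the example. AND is linearly separable, so the perceptron converges; XOR is not, so it never does.

```python
import numpy as np

def train_perceptron(X, y, epochs=100, lr=1.0):
    """Classic single-layer perceptron learning rule.

    Returns (weights, converged). Converges iff the two classes
    (y in {0, 1}) are linearly separable.
    """
    X = np.hstack([X, np.ones((len(X), 1))])  # append a constant bias input
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        errors = 0
        for xi, yi in zip(X, y):
            pred = 1 if xi @ w > 0 else 0
            if pred != yi:
                w += lr * (yi - pred) * xi  # perceptron update on mistakes
                errors += 1
        if errors == 0:
            return w, True   # a full pass with no mistakes: separable
    return w, False          # epoch budget exhausted without converging

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
_, sep_and = train_perceptron(X, np.array([0, 0, 0, 1]))  # AND: separable
_, sep_xor = train_perceptron(X, np.array([0, 1, 1, 0]))  # XOR: not separable
print(sep_and, sep_xor)  # prints: True False
```

The RDP sidesteps this limitation by recursively adding intermediate neurons, each separating a linearly separable subset of the data, until the augmented problem becomes linearly separable.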