Multilayer Feedforward Neural Network Based on Multi-valued Neurons (MLMVN) and a Backpropagation Learning Algorithm

  • Authors:
  • Igor Aizenberg; Claudio Moraga

  • Affiliations:
  • Department of Computer and Information Sciences, Texas A&M University–Texarkana, P.O. Box 5518, 2600 N. Robison Rd., Texarkana, TX 75505, USA; Department of Computer Science-1, University of Dortmund, 44221 Dortmund, Germany

  • Venue:
  • Soft Computing - A Fusion of Foundations, Methodologies and Applications
  • Year:
  • 2006

Abstract

A multilayer neural network based on multi-valued neurons (MLMVN) is considered in the paper. A multi-valued neuron (MVN) is based on the principles of multiple-valued threshold logic over the field of complex numbers. The most important properties of the MVN are complex-valued weights, inputs and outputs coded by the kth roots of unity, and an activation function that maps the complex plane onto the unit circle. MVN learning reduces to movement along the unit circle; it is based on a simple linear error-correction rule and does not require a derivative. It is shown that, by combining the traditional architecture of a multilayer feedforward neural network (MLF) with the high functionality of the MVN, it is possible to obtain a new powerful neural network. Its training does not require a derivative of the activation function, and its functionality is higher than that of an MLF containing the same number of layers and neurons. These advantages of MLMVN are confirmed by testing on the parity n, two spirals and “sonar” benchmarks and on Mackey–Glass time series prediction.
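To make the abstract's description of the MVN more concrete, the sketch below illustrates a discrete activation function that maps a complex weighted sum to the nearest sector's kth root of unity, together with a derivative-free error-correction weight update. This is a minimal illustration assuming the common MVN formulation (error shared equally among the n+1 weights); the function names (mvn_activation, mvn_update) and the learning-rate handling are illustrative, and the paper's exact normalization may differ.

```python
import numpy as np

def mvn_activation(z, k):
    """Discrete MVN activation: map the weighted sum z to the k-th root of
    unity whose sector of the unit circle contains arg(z)."""
    j = int(np.floor(k * (np.angle(z) % (2 * np.pi)) / (2 * np.pi)))
    return np.exp(2j * np.pi * j / k)

def mvn_update(weights, x, desired, k, lr=1.0):
    """One error-correction learning step for a single MVN (illustrative).
    weights : complex array of length n+1, weights[0] being the bias w0
    x       : complex inputs of length n (k-th roots of unity)
    desired : desired output, a k-th root of unity
    """
    x_ext = np.concatenate(([1.0 + 0j], x))       # constant input 1 for the bias weight
    z = weights @ x_ext                           # complex weighted sum
    delta = desired - mvn_activation(z, k)        # error measured on the unit circle
    # derivative-free linear correction, distributed over the n+1 weights
    return weights + (lr / len(x_ext)) * delta * np.conj(x_ext)
```

A usage sketch: for k = 4 (inputs and outputs drawn from {1, i, -1, -i}), repeatedly calling mvn_update over the training set moves the neuron's output along the unit circle toward the desired root of unity without evaluating any derivative of the activation function.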