XMLP: a Feed-Forward Neural Network with Two-Dimensional Layers and Partial Connectivity

  • Authors:
  • Antonio Cañas; Eva M. Ortigosa; Antonio F. Díaz; Julio Ortega

  • Affiliations:
  • Dept. of Computer Architecture and Technology, ETS Ingeniería Informática, University of Granada, Granada, Spain E-18071 (all authors)

  • Venue:
  • IWANN '03 Proceedings of the 7th International Work-Conference on Artificial and Natural Neural Networks: Part II: Artificial Neural Nets Problem Solving Methods
  • Year:
  • 2003

Abstract

This work presents an MLP-like feed-forward network with partially connected two-dimensional layers and additional features, such as configurable activation functions and batched backpropagation with several smoothing-momentum alternatives. We call this model the eXtended Multi-Layer Perceptron (XMLP) because it extends the connectivity of the MLP. We describe its architecture, the activation functions it can use, its learning algorithm, and the discretization intended for hardware implementation. We also present a configurable graphical tool developed to train and simulate any MLP-like network (totally or partially connected). Finally, we report results on speech recognition comparing total with partial connectivity, and continuous with discrete operation.
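To make the idea of partial connectivity between two-dimensional layers concrete, the following is a minimal NumPy sketch, not the authors' implementation: each output unit connects only to a small rectangular neighbourhood of the 2-D input layer, expressed as a binary mask over a full weight matrix. The receptive-field size `rf` and the centring scheme are illustrative assumptions; the XMLP's actual connectivity configuration may differ.

```python
import numpy as np

def local_mask(in_shape, out_shape, rf=3):
    """Binary connectivity mask: each output unit sees only an rf x rf
    neighbourhood of the 2-D input layer (illustrative assumption)."""
    H, W = in_shape
    h, w = out_shape
    mask = np.zeros((h * w, H * W))
    for i in range(h):
        for j in range(w):
            # centre of this unit's receptive field in the input grid
            ci, cj = int(i * H / h), int(j * W / w)
            for di in range(-(rf // 2), rf // 2 + 1):
                for dj in range(-(rf // 2), rf // 2 + 1):
                    ii, jj = ci + di, cj + dj
                    if 0 <= ii < H and 0 <= jj < W:
                        mask[i * w + j, ii * W + jj] = 1.0
    return mask

def forward(x2d, W_full, mask, act=np.tanh):
    """One feed-forward step: element-wise masking of the weight matrix
    implements partial connectivity; `act` stands in for the model's
    configurable activation function."""
    return act((W_full * mask) @ x2d.ravel())

rng = np.random.default_rng(0)
in_shape, out_shape = (8, 8), (4, 4)
mask = local_mask(in_shape, out_shape, rf=3)
W_full = rng.standard_normal(mask.shape) * 0.1
y = forward(rng.standard_normal(in_shape), W_full, mask)
```

Because the mask zeroes most entries, only a small fraction of the full weight matrix is active, which is what makes hardware implementation of such partially connected layers attractive.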