Comparisons of single- and multiple-hidden-layer neural networks

  • Authors: Takehiko Nakama
  • Affiliation: European Center for Soft Computing, Edificio Científico Tecnológico, Mieres, Spain
  • Venue: ISNN'11: Proceedings of the 8th International Conference on Advances in Neural Networks, Part I
  • Year: 2011

Abstract

In this study we conduct fair and systematic comparisons of two types of neural networks: single- and multiple-hidden-layer networks. To make the comparisons fair, we ensure that both types use the same activation and output functions and have the same numbers of nodes, feedforward connections, and parameters. The networks are trained with gradient descent to approximate linear and quadratic target functions, and we examine their convergence properties. We show that, for both linear and quadratic targets, the range of learning rates under which training converges is wider for networks with a single hidden layer than for those with multiple hidden layers. We also show that single-hidden-layer networks converge to linear target functions faster than multiple-hidden-layer networks do.
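
A minimal sketch of this kind of parameter-matched comparison is shown below. The specific architecture pair (1-4-1 versus 1-2-2-1), the sigmoid hidden units, the linear target f(x) = 2x + 1, and the learning rate are illustrative assumptions, not the paper's exact configuration. The pair is chosen so that both networks have the same number of nodes (6), feedforward connections (8), and trainable parameters (13), which matches the fairness criteria described in the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class MLP:
    """Fully connected network: sigmoid hidden layers, linear output."""
    def __init__(self, sizes):
        self.W = [rng.normal(0.0, 0.5, (m, n)) for m, n in zip(sizes[:-1], sizes[1:])]
        self.b = [np.zeros(n) for n in sizes[1:]]

    def forward(self, x):
        self.a = [x]                      # cache activations for backprop
        h = x
        for i, (W, b) in enumerate(zip(self.W, self.b)):
            z = h @ W + b
            h = z if i == len(self.W) - 1 else sigmoid(z)  # linear output layer
            self.a.append(h)
        return h

    def gd_step(self, x, y, lr):
        """One full-batch gradient descent step on L = mean squared error / 2."""
        pred = self.forward(x)
        delta = (pred - y) / x.shape[0]   # dL/dz at the linear output
        for i in reversed(range(len(self.W))):
            gW, gb = self.a[i].T @ delta, delta.sum(axis=0)
            if i > 0:                     # propagate through sigmoid'(z) = a(1 - a)
                delta = (delta @ self.W[i].T) * self.a[i] * (1.0 - self.a[i])
            self.W[i] -= lr * gW
            self.b[i] -= lr * gb
        return 0.5 * np.mean((pred - y) ** 2)

def n_params(net):
    return sum(W.size for W in net.W) + sum(b.size for b in net.b)

# Hypothetical linear target f(x) = 2x + 1 on [-1, 1].
x = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = 2.0 * x + 1.0

single = MLP([1, 4, 1])     # one hidden layer:  6 nodes, 8 connections
multi = MLP([1, 2, 2, 1])   # two hidden layers: 6 nodes, 8 connections
assert n_params(single) == n_params(multi) == 13  # identical parameter counts

for step in range(5001):
    loss_s = single.gd_step(x, y, lr=0.5)   # lr chosen arbitrarily for the sketch
    loss_m = multi.gd_step(x, y, lr=0.5)
    if step % 1000 == 0:
        print(f"step {step:4d}  single: {loss_s:.6f}  multi: {loss_m:.6f}")
```

Sweeping `lr` over a grid and recording which values lead to divergence for each network would reproduce the kind of learning-rate comparison the abstract describes.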