Revisiting squared-error and cross-entropy functions for training neural network classifiers

  • Authors:
  • M. Kline; L. Berardi

  • Affiliations:
  • Cameron School of Business, University of North Carolina Wilmington, Wilmington, NC, USA; Graduate School of Management, Kent State University, Kent, OH 44221, USA

  • Venue:
  • Neural Computing and Applications
  • Year:
  • 2005

Abstract

This paper investigates the efficacy of the cross-entropy and squared-error objective functions used in training feed-forward neural networks to estimate posterior probabilities. Previous research has found no appreciable difference between neural network classifiers trained using cross-entropy or squared-error. The approach employed here, though, shows that cross-entropy has significant, practical advantages over squared-error.
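For reference, the two objective functions are commonly written as follows for a network with outputs y_{nk} and binary targets t_{nk}, summed over N training patterns and K output classes. This notation is an illustrative sketch of the standard formulations, not necessarily the exact expressions used in the paper:

E_{SE} = \frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K}\bigl(t_{nk} - y_{nk}\bigr)^{2}

E_{CE} = -\frac{1}{N}\sum_{n=1}^{N}\sum_{k=1}^{K}\Bigl[t_{nk}\ln y_{nk} + (1 - t_{nk})\ln\bigl(1 - y_{nk}\bigr)\Bigr]

Both are minimized when the network outputs match the target class indicators, but the cross-entropy form penalizes confident misclassifications more heavily, which is the usual motivation for comparing the two when the outputs are interpreted as posterior probability estimates.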