A Constructive Approach to Calculating Lower Entropy Bounds

  • Authors:
  • Valeriu Beiu; Sorin Draghici; Thierry De Pauw

  • Affiliations:
  • Los Alamos National Laboratory, Division NIS–1, Mail Stop D466, Los Alamos, New Mexico 87545, U.S.A.; Vision and Neural Networks Laboratory, Department of Computer Science, Wayne State University, 431 State Hall, Detroit, Michigan 48202, U.S.A.; Université Catholique de Louvain, Département de Mathématique, Chemin du Cyclotron, 2, B–1348 Louvain-la-Neuve, Belgium. E-mail: beiu@lanl.gov

  • Venue:
  • Neural Processing Letters
  • Year:
  • 1999


Abstract

This paper presents a constructive approach to estimating the size of a neural network necessary to solve a given classification problem. The results are derived using an information entropy approach in the context of limited precision integer weights. Such weights are particularly suited for hardware implementations since the area they occupy is limited, and the computations performed with them can be efficiently implemented in hardware. The considerations presented use an information entropy perspective and calculate lower bounds on the number of bits needed in order to solve a given classification problem. These bounds are obtained by approximating the classification hypervolumes with the volumes of several regular (i.e., highly symmetric) n-dimensional bodies. The bounds given here allow the user to choose the appropriate size of a neural network such that: (i) the given classification problem can be solved, and (ii) the network architecture is not oversized. All considerations presented take into account the restrictive case of limited precision integer weights, and therefore can be directly applied when designing VLSI implementations of neural networks.
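As a rough illustration of the entropy perspective the abstract describes, the sketch below computes the Shannon entropy of a class distribution, a generic information-theoretic lower bound on the bits a classifier must encode per example. This is only a hedged, minimal example: the paper's actual bounds additionally account for limited precision integer weights and the hypervolume approximations, which are not reproduced here, and the function name is our own.

```python
import math

def entropy_lower_bound_bits(class_counts):
    """Shannon entropy (bits per example) of a class distribution.

    A generic lower bound on the information any classifier must
    encode to separate the classes. NOTE: illustrative only -- the
    paper's bounds further incorporate limited-precision integer
    weights and n-dimensional hypervolume geometry.
    """
    total = sum(class_counts)
    h = 0.0
    for count in class_counts:
        if count > 0:
            p = count / total
            h -= p * math.log2(p)  # contribution of one class
    return h

# A balanced two-class problem requires at least 1 bit per example:
print(entropy_lower_bound_bits([500, 500]))  # -> 1.0
```

Multiplying such a per-example bound by the number of training examples gives a crude floor on the total number of weight bits, which is the kind of quantity the paper's tighter geometric bounds refine.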