Modified multi-layered perceptron applied to packing and covering problems

  • Authors:
  • Richard Labib; Reza Assadi

  • Affiliations:
  • École Polytechnique de Montréal, Department of Mathematics and Industrial Engineering, Montreal, Canada (both authors)

  • Venue:
  • Neural Computing and Applications
  • Year:
  • 2007

Abstract

Prior knowledge of an input–output problem often leads to supervised-learning restrictions that can hamper the multi-layered perceptron’s (MLP) capacity to find an optimal solution. Restrictions such as fixing weights and modifying input variables may influence the convergence of the back-propagation algorithm. This paper shows mathematically how to handle such constraints in order to obtain a modified version of the traditional MLP capable of solving targeted problems. More specifically, it is shown that fixing particular weights according to prior information, as well as transforming incoming inputs, enables the user to limit the MLP search to a desired type of solution. The ensuing modifications to the learning algorithm are established. Moreover, four supervised improvements offer insight into how to control the convergence of the weights towards an optimal solution. Finally, applications involving packing and covering problems illustrate the potential and performance of this modified MLP.
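The central mechanism the abstract describes, fixing selected weights so that back-propagation searches only over the remaining free parameters, can be illustrated with a small sketch. This is not the authors' exact formulation (the paper's specific constraints, input transformations, and packing/covering encodings are not reproduced here); it is a minimal one-hidden-layer MLP in which a gradient mask freezes chosen weights, a common way to encode such prior-knowledge constraints:

```python
import numpy as np

# Sketch only: a one-hidden-layer MLP trained by back-propagation in which
# selected weights are held fixed by masking their gradient updates. The
# data, architecture, and which weights are frozen are all hypothetical.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy regression data (placeholder for a real packing/covering instance).
X = rng.normal(size=(64, 3))
y = sigmoid(X @ np.array([1.0, -2.0, 0.5]))[:, None]

# Weights: input->hidden (3x4), hidden->output (4x1).
W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

# Prior knowledge encoded as binary masks: a 0 entry means "keep fixed".
mask1 = np.ones_like(W1)
mask1[0, 0] = 0.0            # freeze one input->hidden weight
mask2 = np.ones_like(W2)

fixed_before = W1[0, 0]
initial_mse = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))

lr = 0.5
for _ in range(500):
    H = sigmoid(X @ W1)                      # forward pass
    out = sigmoid(H @ W2)
    d_out = (out - y) * out * (1 - out)      # squared-error output delta
    d_hid = (d_out @ W2.T) * H * (1 - H)     # back-propagated hidden delta
    # Masked gradient steps: frozen weights receive no update at all.
    W2 -= lr * mask2 * (H.T @ d_out) / len(X)
    W1 -= lr * mask1 * (X.T @ d_hid) / len(X)

final_mse = float(np.mean((sigmoid(sigmoid(X @ W1) @ W2) - y) ** 2))
```

Because the mask zeroes the gradient component rather than resetting the weight afterwards, the frozen entry `W1[0, 0]` is untouched throughout training while every other weight adapts freely.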