A multiscale scheme for approximating the quantron's discriminating function
IEEE Transactions on Neural Networks
Prior knowledge of an input–output problem often leads to supervised learning restrictions that can hamper the multi-layered perceptron's (MLP) capacity to find an optimal solution. Restrictions such as fixing weights and modifying input variables may influence the convergence of the back-propagation algorithm. This paper will show mathematically how to handle such constraints in order to obtain a modified version of the traditional MLP capable of solving targeted problems. More specifically, it will be shown that fixing particular weights according to prior information, as well as transforming incoming inputs, enables the user to limit the MLP search to a desired type of solution. The ensuing modifications to the learning algorithm will be established. Moreover, four supervised improvements will offer insight into how to control the convergence of the weights towards an optimal solution. Finally, applications involving packing and covering problems will be used to illustrate the potential and performance of this modified MLP.
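The abstract's core idea of fixing particular weights according to prior information can be illustrated with a minimal sketch (not the paper's actual algorithm): a one-hidden-layer MLP trained by plain gradient descent, where a binary mask zeroes the gradients of the weights designated as fixed. The choice of which weight to freeze here is purely hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR-like input-output problem
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Weights: input->hidden (2x4), hidden->output (4x1)
W1 = rng.normal(0.0, 1.0, (2, 4))
W2 = rng.normal(0.0, 1.0, (4, 1))
b1 = np.zeros(4)
b2 = np.zeros(1)

# Mask: 1 = trainable, 0 = fixed by prior knowledge (hypothetical choice)
mask1 = np.ones_like(W1)
mask1[0, 0] = 0.0           # freeze a single input->hidden weight
W1_init = W1.copy()

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(2000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass for mean squared error
    d_out = (out - y) * out * (1 - out)
    dW2 = h.T @ d_out
    db2 = d_out.sum(axis=0)
    d_h = (d_out @ W2.T) * h * (1 - h)
    dW1 = (X.T @ d_h) * mask1   # masked gradient: fixed weight stays put
    db1 = d_h.sum(axis=0)
    W1 -= lr * dW1
    b1 -= lr * db1
    W2 -= lr * dW2
    b2 -= lr * db2

# The frozen weight is untouched; the rest of the network has trained.
print(W1[0, 0] == W1_init[0, 0])
```

Masking gradients rather than restoring values after each update is the simpler design: the frozen entries never move, so no extra bookkeeping is needed, and the same idea extends directly to masks over `W2` or the biases.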