Improved computation for Levenberg-Marquardt training
IEEE Transactions on Neural Networks
This paper presents learning of multilayer Potts perceptrons (MLPotts) for data-driven function approximation. A Potts perceptron is composed of a receptive field and a $K$-state transfer function generalized from the sigmoid-like transfer functions of traditional perceptrons. An MLPotts network maps a high-dimensional input to a sum of multiple postnonlinear projections, each with its own postnonlinearity realized by a weighted $K$-state transfer function. MLPotts networks span a function space that theoretically covers the network functions of multilayer perceptrons. Compared with traditional perceptrons, weighted Potts perceptrons realize more flexible postnonlinear functions for nonlinear mappings. Numerical simulations show that MLPotts learning by the Levenberg–Marquardt (LM) method significantly improves on traditional supervised learning of multilayer perceptrons for data-driven function approximation.
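The architecture described in the abstract can be illustrated with a minimal sketch. The exact form of the paper's $K$-state transfer function is not given here, so the code below assumes a common mean-field Potts form: a softmax over $K$ state values, which reduces to $\tanh$ when the states are $\{-1, +1\}$ and therefore generalizes a sigmoid-like activation. The network output is a weighted sum of postnonlinear projections, and a standard Levenberg–Marquardt update $\Delta w = (J^\top J + \mu I)^{-1} J^\top e$ is included for reference. All function and variable names are illustrative, not from the paper.

```python
import numpy as np

def potts_transfer(a, states, beta=1.0):
    """Illustrative K-state transfer function (assumed mean-field form).

    Softmax over K state values; with states = [-1, 1] this
    reduces to tanh(beta * a), a sigmoid-like activation.
    """
    e = np.exp(beta * np.outer(a, states))      # (m, K) state energies
    p = e / e.sum(axis=1, keepdims=True)        # softmax over states
    return p @ states                            # expected state, shape (m,)

def mlpotts_forward(x, A, W, states, beta=1.0):
    """Sum of postnonlinear projections of a high-dimensional input x.

    x: (d,) input; A: (m, d) receptive fields; W: (m,) output weights.
    Each row of A defines one projection, passed through the weighted
    K-state transfer function and summed with weights W.
    """
    u = A @ x                                    # m linear projections
    return W @ potts_transfer(u, states, beta)   # scalar network output

def lm_step(J, e, mu):
    """Standard Levenberg-Marquardt parameter update.

    Solves (J^T J + mu I) dw = J^T e for the weight increment dw,
    where J is the Jacobian of residuals e w.r.t. the parameters.
    """
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + mu * np.eye(n), J.T @ e)
```

With two states $\{-1, +1\}$ the forward pass coincides with a single-hidden-layer perceptron using $\tanh$ units, matching the claim that MLPotts networks cover the function space of multilayer perceptrons; larger $K$ yields more flexible postnonlinearities.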