Objective functions for training new hidden units in constructive neural networks. IEEE Transactions on Neural Networks.
Constructive feedforward neural networks using Hermite polynomial activation functions. IEEE Transactions on Neural Networks.
Some enhancements of, and comments on, the approximation of 2D functions in an orthogonal basis are presented, as a direct extension of the results obtained in [2]. First, we prove that a constant bias extracted from the function contributes to a decrease of the approximation error, and we show how to choose that bias by proving an appropriate theorem. Second, we discuss how to select a 2D basis among orthonormal functions so as to achieve minimum error for a fixed dimension of the approximation space. Third, we prove that the loss of orthonormality caused by truncating the argument range of the basis functions does not affect the overall approximation error, and that the formula for calculating the expansion coefficients remains the same. An illustrative example shows how these enhancements can be used.
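The general scheme the abstract refers to can be illustrated with a minimal sketch: expand a 2D function in a tensor product of orthonormal basis functions, compute the expansion coefficients as inner products, and observe that the coefficient of the constant basis function carries exactly the optimal constant bias. The sketch below is not the paper's own construction; it assumes a normalized Legendre basis on [-1, 1]^2 and a test function f(x, y) = exp(xy), both chosen only for illustration.

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

# Orthonormal Legendre polynomials on [-1, 1]: phi_n(x) = sqrt((2n+1)/2) * P_n(x),
# so that the inner product <phi_m, phi_n> equals the Kronecker delta.
def phi(n, x):
    coeffs = np.zeros(n + 1)
    coeffs[n] = 1.0
    return np.sqrt((2 * n + 1) / 2.0) * legval(x, coeffs)

# Gauss-Legendre quadrature for evaluating the inner-product integrals.
nodes, weights = leggauss(40)
X, Y = np.meshgrid(nodes, nodes, indexing="ij")
W = np.outer(weights, weights)

# Illustrative smooth test function (an assumption, not from the paper).
F = np.exp(X * Y)

# Expansion coefficients C[i, j] = <f, phi_i(x) phi_j(y)> over [-1, 1]^2
# for a fixed N x N approximation space.
N = 6
C = np.empty((N, N))
for i in range(N):
    for j in range(N):
        C[i, j] = np.sum(W * F * phi(i, X) * phi(j, Y))

# Reconstruct on the quadrature grid and measure the L2 approximation error.
approx = np.zeros_like(F)
for i in range(N):
    for j in range(N):
        approx += C[i, j] * phi(i, X) * phi(j, Y)
l2_error = np.sqrt(np.sum(W * (F - approx) ** 2))
print(f"L2 error with {N}x{N} basis: {l2_error:.2e}")

# The (0, 0) term is constant: C[0, 0] * phi_0(x) * phi_0(y) = C[0, 0] / 2,
# which equals the mean of f over the square, i.e. the best constant bias.
mean_f = np.sum(W * F) / 4.0
print(f"constant term {C[0, 0] / 2.0:.6f} vs mean of f {mean_f:.6f}")
```

For a smooth function the error decays rapidly with N, and the constant term of the expansion coincides with the mean of f, which is one way to see why extracting the right constant bias can only reduce the residual error.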