The selection of the frequencies of new hidden units in sequential Feed-forward Neural Networks (FNNs) usually involves a non-linear optimization problem that cannot be solved analytically. Most models in the literature choose the new frequency so that it matches the previous residue as well as possible. Several exceptions to this residue-matching idea instead perform an (implicit or explicit) orthogonalization of the output vectors of the hidden units. We present an experimental study of these approaches to frequency selection in sequential FNNs. Our experimental results indicate that orthogonalizing the hidden vectors outperforms matching the residue, for both approximation and generalization.
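The two strategies contrasted above can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's exact algorithm: it grows a single-hidden-layer tanh network one unit at a time, selecting each new unit's input weights ("frequencies") from random candidates either by best match to the current residue, or by best match after Gram-Schmidt orthogonalization of the candidate's output vector against the previous hidden output vectors. All function names and parameter choices here are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def grow_fnn(X, y, n_units=10, n_candidates=50, strategy="residue"):
    """Sequentially add hidden units; return the final residual norm.

    strategy="residue":    pick the candidate whose output vector best
                           correlates with the current residue.
    strategy="orthogonal": first orthogonalize each candidate's output
                           vector against previously kept hidden vectors
                           (Gram-Schmidt), then pick by correlation.
    """
    H = []                      # hidden output vectors kept so far
    residual = y.astype(float).copy()
    for _ in range(n_units):
        best_score, best_h = -np.inf, None
        for _ in range(n_candidates):
            w = rng.normal(size=X.shape[1])   # random candidate frequency
            b = rng.normal()
            h = np.tanh(X @ w + b)
            if strategy == "orthogonal":
                for prev in H:                # Gram-Schmidt step
                    h = h - (h @ prev) / (prev @ prev) * prev
                if np.linalg.norm(h) < 1e-8:  # degenerate candidate
                    continue
            # normalized correlation of candidate output with the residue
            score = abs(h @ residual) / np.linalg.norm(h)
            if score > best_score:
                best_score, best_h = score, h
        H.append(best_h)
        # least-squares output weight for the new vector; update residue
        beta = (best_h @ residual) / (best_h @ best_h)
        residual = residual - beta * best_h
    return np.linalg.norm(residual)

# Toy regression problem
X = rng.normal(size=(200, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
err_res = grow_fnn(X, y, strategy="residue")
err_ort = grow_fnn(X, y, strategy="orthogonal")
```

Note that in exact arithmetic the greedy output weight `beta` never increases the residual norm, so both strategies monotonically reduce the training residue; the comparison studied in the paper concerns how quickly they do so and how well the resulting networks generalize.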