A novel objective function is proposed for optimizing the hidden-unit functions in feedforward neural networks. The objective function measures how well a hidden unit reduces the least-squares error of the linear output unit, and it is derived from the decrease in output error obtained by adding that hidden unit. The optimized output state vectors of the hidden units span a state space that contains the desired output vectors of the network. Maximizing the objective function is equivalent to minimizing the angle between the desired output vector and the projection of the hidden unit's output state vector onto the orthogonal complement of the subspace spanned by the other state vectors. An approximate solution can be obtained with a gradient-ascent algorithm. This optimization method is useful both for constructing fully connected feedforward networks and for minimizing the size of layered networks.
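The geometric criterion above can be sketched numerically: over all training patterns, project the candidate unit's output state vector onto the orthogonal complement of the span of the existing state vectors, and maximize the squared cosine of the angle between that projection and the desired output vector. The sketch below is a minimal NumPy illustration, not the paper's exact algorithm; the tanh activation, the finite-difference gradient, and all function and parameter names (`objective`, `train_unit`, the learning rate, the step count) are assumptions introduced for the example.

```python
import numpy as np

def objective(w, X, d, H):
    """Squared cosine between d and the projection of the candidate unit's
    state vector onto the orthogonal complement of span(H).

    X : (n_patterns, n_inputs) training inputs
    d : (n_patterns,) desired output vector
    H : (n_patterns, n_units) state vectors of the existing hidden units
    """
    # Candidate hidden unit's output state vector over all training patterns
    # (tanh is an assumed choice of hidden-unit activation).
    h = np.tanh(X @ w)
    # Project h onto the orthogonal complement of the subspace spanned by
    # the other units' state vectors.
    if H.shape[1] > 0:
        Q, _ = np.linalg.qr(H)
        h_perp = h - Q @ (Q.T @ h)
    else:
        h_perp = h
    # Maximizing this squared cosine minimizes the angle between d and h_perp.
    return (d @ h_perp) ** 2 / ((d @ d) * (h_perp @ h_perp) + 1e-12)

def train_unit(X, d, H, steps=200, lr=0.2, eps=1e-5, seed=0):
    """Gradient ascent on the objective; a central-difference gradient is
    used here purely to keep the sketch short."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(steps):
        g = np.zeros_like(w)
        for i in range(len(w)):
            e = np.zeros_like(w)
            e[i] = eps
            g[i] = (objective(w + e, X, d, H)
                    - objective(w - e, X, d, H)) / (2 * eps)
        w += lr * g
    return w
```

In a constructive setting, one would train a candidate unit this way, append its state vector `np.tanh(X @ w)` as a new column of `H`, and repeat until the desired output vector lies (approximately) in the span of the hidden-unit state vectors.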