Cascade-correlation (Cascor) is a popular supervised learning architecture that dynamically grows layers of hidden neurons with fixed nonlinear activations (e.g., sigmoids), so that the network topology (size, depth) can be determined efficiently. Similar to a cascade-correlation learning network (CCLN), a projection pursuit learning network (PPLN) also grows its hidden neurons dynamically. Unlike a CCLN, where cascaded connections from the existing hidden units to each new candidate hidden unit are required to establish high-order nonlinearity in approximating the residual error, a PPLN achieves high-order nonlinearity by using trainable parametric or semi-parametric nonlinear smooth activations trained under a minimum mean squared error criterion. An analysis is provided to show that the maximum correlation training criterion used in a CCLN tends to produce hidden units that saturate, which makes it more suitable for classification tasks than for regression tasks, as evidenced in the simulation results. It is also observed that this critical weakness of CCLN can potentially carry over to classification tasks, such as the two-spiral benchmark used in the original CCLN paper.
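To make the maximum-correlation criterion concrete, the following is a minimal NumPy sketch of the candidate-training step, not the authors' implementation: it trains a single sigmoid candidate unit (rather than a pool of candidates, as in the original Cascor procedure) by gradient ascent on the criterion S = sum_o |sum_p (V_p - Vbar)(E_{p,o} - Ebar_o)|, where V is the candidate's output and E holds the current network's residual errors. The function and variable names (train_candidate, X, E, lr) are illustrative choices, not names from the paper.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_candidate(X, E, lr=0.1, epochs=200, seed=None):
    """Train one candidate hidden unit by gradient ascent on the
    Cascor correlation criterion S.

    X : (P, D) inputs to the candidate (original inputs plus the
        outputs of all existing hidden units, per the cascade scheme)
    E : (P, O) residual errors of the current network at each output
    Returns the trained input weight vector w of shape (D,).
    """
    rng = np.random.default_rng(seed)
    P, D = X.shape
    w = rng.normal(scale=0.1, size=D)

    for _ in range(epochs):
        V = sigmoid(X @ w)                  # candidate outputs, shape (P,)
        Vc = V - V.mean()                   # centered candidate outputs
        Ec = E - E.mean(axis=0)             # centered residual errors, (P, O)
        cov = Vc @ Ec                       # per-output covariances, (O,)

        # dS/dw: each output contributes with the sign of its covariance;
        # the sigmoid derivative V(1-V) enters because V depends on w.
        # (The mean of V is treated as a constant, as in the usual
        # Cascor derivation.)
        dS_dV = Ec @ np.sign(cov)           # (P,)
        grad = X.T @ (dS_dV * V * (1.0 - V))
        w += lr * grad                      # ascend: S is maximized
    return w

This sketch also illustrates the saturation tendency discussed above: ascent on S rewards candidate outputs that covary strongly with the residuals, and with a bounded sigmoid that pressure tends to drive V toward its extremes, the behavior the analysis attributes to the maximum correlation criterion.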