A framework for improved training of Sigma-Pi networks
IEEE Transactions on Neural Networks
Sigma-Pi (Σ-Π) neural networks (SPNNs) are known to provide more powerful mapping capability than traditional feed-forward neural networks. A unified convergence analysis of the batch gradient algorithm for SPNN learning is presented, covering three classes of SPNNs: Σ-Π-Σ, Σ-Σ-Π and Σ-Π-Σ-Π. Monotone decrease of the error function during the iterations is also guaranteed.
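To make the Σ-Π structure concrete, here is a minimal illustrative sketch (not taken from the paper) of a single Sigma-Pi unit: each Π term multiplies a chosen subset of the inputs, and the unit forms a weighted sum of those products. The function name, the index sets, and the weights are all hypothetical choices for illustration.

```python
import numpy as np

def sigma_pi_unit(x, index_sets, w):
    """Hypothetical Sigma-Pi unit.

    x: input vector
    index_sets: list of tuples of input indices; each tuple defines
                one product (Pi) term over the selected inputs
    w: one weight per product term (Sigma part is the weighted sum)
    """
    products = np.array([np.prod(x[list(s)]) for s in index_sets])
    return float(w @ products)

x = np.array([2.0, 3.0, 0.5])
index_sets = [(0,), (0, 1), (1, 2)]   # terms: x0, x0*x1, x1*x2
w = np.array([1.0, 0.5, 2.0])
y = sigma_pi_unit(x, index_sets, w)   # 1*2 + 0.5*6 + 2*1.5 = 8.0
```

Because the output is a polynomial in the inputs, a gradient step with respect to `w` is linear in the product terms, which is the quantity the batch gradient analysis in the paper operates on.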