Constrained Learning in Neural Networks: Application to Stable Factorization of 2-D Polynomials
Neural Processing Letters
A novel algorithm is presented that supplements the training phase of feedforward networks with various forms of information about desired learning properties. This information is represented by conditions that must be satisfied in addition to minimization of the usual mean square error cost function. These conditions aim to improve convergence, learning speed, and generalization through prompt activation of the hidden units, optimal alignment of successive weight-vector offsets, elimination of excessive hidden nodes, and regulation of the magnitude of search steps in weight space. The algorithm is applied to several small- and large-scale binary benchmark training tasks, to test its convergence ability and learning speed, as well as to a large-scale OCR problem, to test its generalization capability. Its performance in terms of the percentage of local minima, learning speed, and generalization ability is evaluated and found superior to that of the backpropagation algorithm and its variants, especially when the statistical significance of the results is taken into account.
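The general idea of supplementing the mean square error with additional conditions can be written as an augmented cost E = MSE + Σᵢ λᵢΦᵢ, where each Φᵢ penalizes deviation from a desired learning property. The sketch below illustrates this scheme only; the network size, XOR task, learning rate, and the particular penalty (plain weight decay, standing in for the paper's actual constraints on hidden-unit activation and weight-vector offsets) are all illustrative assumptions, not the authors' method:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Tiny 2-2-1 feedforward network on XOR (illustrative choice).
random.seed(0)
w = [random.uniform(-1.0, 1.0) for _ in range(9)]  # 6 hidden + 3 output params
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(x, w):
    h1 = sigmoid(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = sigmoid(w[3] * x[0] + w[4] * x[1] + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def cost(w, lam=1e-3):
    # Augmented cost: MSE plus a constraint term.  Weight decay is used
    # here only as a stand-in for the paper's learning-property conditions.
    mse = sum((forward(x, w) - t) ** 2 for x, t in data) / len(data)
    penalty = sum(wi * wi for wi in w)
    return mse + lam * penalty

def grad(w, eps=1e-5):
    # Central-difference numerical gradient keeps the sketch short;
    # a real implementation would use backpropagation.
    g = []
    for i in range(len(w)):
        wp, wm = list(w), list(w)
        wp[i] += eps
        wm[i] -= eps
        g.append((cost(wp) - cost(wm)) / (2 * eps))
    return g

c0 = cost(w)
lr = 0.5
for epoch in range(2000):
    g = grad(w)
    w = [wi - lr * gi for wi, gi in zip(w, g)]

print("cost: %.4f -> %.4f" % (c0, cost(w)))
```

Gradient descent on the augmented cost trades a small bias in the MSE minimum for the desired property encoded by the penalty; the λᵢ weights control that trade-off.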