Global Energy Minimization: A Transformation Approach
EMMCVPR '01 Proceedings of the Third International Workshop on Energy Minimization Methods in Computer Vision and Pattern Recognition
This paper addresses the issues of global optimality and training of a Feedforward Neural Network (FNN) error function incorporating the weight-decay regularizer. A network with a single hidden layer and a single output unit is considered. Explicit vector and matrix canonical forms for the Jacobian and Hessian of the network are presented. Convexity analysis is then performed using the known canonical structure of the Hessian. Next, a global optimality characterization of the FNN error function is derived from the convexity results together with a convex monotonic transformation. Based on this characterization, an iterative algorithm is proposed for global FNN learning. Numerical experiments with benchmark examples show better convergence of the proposed learning scheme compared with many existing methods in the literature. The network is also shown to generalize well on a face recognition problem.
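The abstract does not reproduce the error function itself. As a point of reference, the sketch below shows one common form of the setting it describes: a single-hidden-layer, single-output network trained on a sum-of-squares error with a weight-decay term. The tanh activation, the parameter packing, and the function names `forward`, `error`, and `gradient` are assumptions made for illustration, not the paper's notation or algorithm.

```python
import numpy as np

def forward(x, w):
    """Single-hidden-layer, single-output network: y = v . tanh(W x + b) + c.
    The tanh activation and parameter packing (W, b, v, c) are assumptions."""
    W, b, v, c = w
    h = np.tanh(W @ x + b)              # hidden-unit activations
    return v @ h + c, h

def error(X, t, w, lam):
    """Sum-of-squares error plus weight decay (lam/2) * ||w||^2 (assumed form)."""
    sq = 0.0
    for x, tp in zip(X, t):
        y, _ = forward(x, w)
        sq += 0.5 * (y - tp) ** 2
    decay = 0.5 * lam * sum(np.sum(p ** 2) for p in w)
    return sq + decay

def gradient(X, t, w, lam):
    """Analytic gradient of the regularized error w.r.t. (W, b, v, c)."""
    W, b, v, c = w
    gW, gb, gv, gc = np.zeros_like(W), np.zeros_like(b), np.zeros_like(v), 0.0
    for x, tp in zip(X, t):
        y, h = forward(x, w)
        r = y - tp                      # residual for this pattern
        d = (1.0 - h ** 2) * v          # back-propagated through tanh
        gv += r * h
        gc += r
        gW += r * np.outer(d, x)
        gb += r * d
    return (gW + lam * W, gb + lam * b, gv + lam * v, gc + lam * c)

# Small usage example with random data.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
t = rng.normal(size=5)
w = (rng.normal(size=(4, 3)), rng.normal(size=4), rng.normal(size=4), 0.1)
print(error(X, t, w, lam=1e-3))
```

The residual/activation quantities accumulated in `gradient` are also the ingredients from which the Jacobian and Hessian of such a network are usually assembled; the paper's own canonical forms and the convex monotonic transformation used for the global optimality characterization are not reproduced here.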