The Pareto-optimality concept is used in this paper to represent a constrained set of solutions that trade off the two main objective functions involved in supervised neural network learning: data-set error and network complexity. The neural network is described as a dynamic system with error and complexity as its state variables, and learning is presented as the process of controlling a learning trajectory in the resulting state space. To control the trajectories, sliding mode dynamics are imposed on the network. It is shown that arbitrary learning trajectories can be achieved by keeping the sliding mode gains within their convergence intervals, and formal proofs of the convergence conditions are presented. The concept of trajectory learning goes beyond the selection of a final state in the Pareto set, since that state can be reached through different trajectories, and the states along a trajectory can be assessed individually against an additional objective function.
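Since the abstract stays at the idea level, the following is a minimal sketch of what trajectory learning in the (error, complexity) state space could look like. It is an illustration only, not the authors' algorithm: the reference trajectory e_ref, the switching gain k, the learning rate eta, and the choice of squared weight norm as the complexity measure are all assumptions made here for concreteness.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression problem: y = sin(x) plus noise.
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X) + 0.1 * rng.normal(size=X.shape)

# Single-hidden-layer network.
H = 20
W1 = rng.normal(scale=0.5, size=(1, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 1)); b2 = np.zeros(1)

def forward(X):
    A = np.tanh(X @ W1 + b1)
    return A, A @ W2 + b2

def error():                      # data-set error (MSE): one state variable
    _, yhat = forward(X)
    return float(np.mean((yhat - y) ** 2))

def complexity():                 # squared weight norm: the other state variable (assumed measure)
    return float(np.sum(W1 ** 2) + np.sum(W2 ** 2))

# Assumed reference trajectory for the error coordinate: exponential decay.
e_ref = lambda t: 0.5 * np.exp(-t / 50.0) + 0.02
k, eta = 1.0, 0.05                # assumed sliding gain and step size

for t in range(300):
    # Backprop gradients of the data-set error.
    A, yhat = forward(X)
    dY = 2.0 * (yhat - y) / len(X)
    gW2, gb2 = A.T @ dY, dY.sum(0)
    dZ = (dY @ W2.T) * (1.0 - A ** 2)
    gW1, gb1 = X.T @ dZ, dZ.sum(0)

    # Sliding surface in the error coordinate; the switching control decides
    # which objective the weight update descends at this step.
    s = error() - e_ref(t)
    u = k * np.sign(s)

    if u > 0:                     # above the reference: reduce data-set error
        W1 -= eta * gW1; b1 -= eta * gb1
        W2 -= eta * gW2; b2 -= eta * gb2
    else:                         # at/below it: spend the slack reducing complexity
        W1 -= eta * 2.0 * W1
        W2 -= eta * 2.0 * W2

    if t % 50 == 0:
        print(f"t={t:3d}  error={error():.4f}  complexity={complexity():.2f}")
```

In this toy version the "control input" is simply which objective's gradient is applied, switched by the sign of the sliding surface, so the state is pushed along a prescribed error-decay curve while excess capacity is traded for lower complexity; the paper's formulation, with its convergence intervals for the sliding mode gains, is more general than this two-branch switch.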