Convergence analysis of sliding mode trajectories in multi-objective neural networks learning

  • Authors:
  • Marcelo Azevedo Costa;Antonio Padua Braga;Benjamin Rodrigues De Menezes

  • Affiliations:
  • Department of Statistics, Universidade Federal de Minas Gerais, Belo Horizonte, MG 31270-901, Brazil;Department of Electronics Engineering, Universidade Federal de Minas Gerais, Brazil;Department of Electronics Engineering, Universidade Federal de Minas Gerais, Brazil

  • Venue:
  • Neural Networks
  • Year:
  • 2012

Abstract

The Pareto-optimality concept is used in this paper to represent the constrained set of solutions that trade off the two main objective functions of supervised neural network learning: data-set error and network complexity. The neural network is described as a dynamic system with error and complexity as its state variables, and learning is presented as the process of controlling a learning trajectory in the resulting state space. To control the trajectories, sliding mode dynamics is imposed on the network. It is shown that arbitrary learning trajectories can be achieved by keeping the sliding mode gains within their convergence intervals, and formal proofs of the convergence conditions are presented. The concept of trajectory learning goes beyond the selection of a final state in the Pareto set, since that state can be reached through different trajectories, and intermediate states along a trajectory can be assessed individually against an additional objective function.
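The idea of error and complexity as controlled state variables can be illustrated with a loose sketch. This is not the authors' formulation or proof setup; the model (a linear least-squares fit), the complexity measure (squared weight norm), the reference value `target`, and the gains `lr` and `K` are all illustrative assumptions. The switching term driven by the sign of the sliding variable `s` is the sliding-mode-style ingredient: the update changes discontinuously depending on which side of the reference surface the current (error, complexity) state lies.

```python
import numpy as np

# Synthetic regression data (illustrative, not from the paper)
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

def error(w):
    # First state variable: data-set error (mean squared error)
    return np.mean((X @ w - y) ** 2)

def complexity(w):
    # Second state variable: network complexity (squared weight norm)
    return np.sum(w ** 2)

w = np.zeros(3)
lr, K, lam = 0.05, 0.5, 0.01   # hypothetical learning rate, switching gain, trade-off
target = 0.2                    # hypothetical reference for the combined objective

for step in range(500):
    # Sliding variable: deviation of the combined objective from its reference
    s = error(w) + lam * complexity(w) - target
    grad_e = 2 * X.T @ (X @ w - y) / len(y)
    grad_c = 2 * w
    # Sliding-mode-style update: the complexity term switches with sign(s),
    # pushing complexity down when the state is above the reference surface
    # and relaxing the penalty when it is below
    w -= lr * (grad_e + K * np.sign(s) * lam * grad_c)

final_error = error(w)
```

In the paper's terms, varying the reference over training (rather than holding `target` fixed) would prescribe a whole learning trajectory through the (error, complexity) state space, and the convergence intervals on the gains guarantee the state can track it.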