Structural identifiability of generalized constraint neural network models for nonlinear regression

  • Authors:
  • Shuang-Hong Yang; Bao-Gang Hu; Paul-Henry Cournède

  • Affiliations:
  • NLPR & LIAMA, Institute of Automation, Chinese Academy of Sciences, Beijing 100080, China; NLPR & LIAMA, Institute of Automation, Chinese Academy of Sciences, Beijing 100080, China; Laboratory of Applied Mathematics and Systems, École Centrale Paris, 92295, France

  • Venue:
  • Neurocomputing
  • Year:
  • 2008

Abstract

Identifiability becomes an essential requirement for learning machines when the models contain physically interpretable parameters. This paper presents two approaches to examining the structural identifiability of generalized constraint neural network (GCNN) models, viewing the model from two different perspectives. First, by taking the model as a static deterministic function, a functional framework is established that can recognize a deficient model and, at the same time, reparameterize it through a pairwise-mode symbolic examination. Second, by viewing the model as the mean function of an isotropic Gaussian conditional distribution, the algebraic approaches [E.A. Catchpole, B.J.T. Morgan, Detecting parameter redundancy, Biometrika 84 (1) (1997) 187-196] are extended to multivariate nonlinear regression models by symbolically checking the linear dependence of the derivative functional vectors. Examples are presented in which the proposed approaches are applied to GCNN nonlinear regression models containing coupled physically interpretable parameters.
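
The algebraic idea behind the second approach can be illustrated with a small symbolic computation. The sketch below is not the paper's actual procedure; it only shows the Catchpole-Morgan-style test that a model is parameter-redundant (structurally non-identifiable) when the derivative vectors of its mean function with respect to the parameters are linearly dependent, i.e. when the symbolic derivative matrix is rank-deficient. The mean function f(x; a, b) = a·b·x and all variable names are hypothetical examples, not taken from the paper.

```python
# Minimal sketch of a symbolic parameter-redundancy check, assuming a
# hypothetical mean function f(x; a, b) = a*b*x in which only the product
# a*b can be identified from data.
import sympy as sp

x, a, b = sp.symbols('x a b')
f = a * b * x
params = [a, b]

# Derivative "functional vector": partial derivatives of f w.r.t. each parameter.
grad = [sp.diff(f, p) for p in params]   # [b*x, a*x]

# Stack the derivative vectors at symbolically distinct inputs so that the
# rank reflects functional (not pointwise) linear dependence.
x1, x2 = sp.symbols('x1 x2')
D = sp.Matrix([[g.subs(x, xi) for g in grad] for xi in (x1, x2)])

rank = D.rank()
print(f"rank = {rank}, number of parameters = {len(params)}")
# rank = 1 < 2  ->  the model is parameter-redundant.
```

Because the symbolic rank (1) is smaller than the number of parameters (2), only a combination of the parameters (here the product a·b) is identifiable, which mirrors the kind of reparameterization step the abstract describes for deficient GCNN models.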