The “weight smoothing” regularization of MLP for Jacobian stabilization

  • Authors:
  • F. Aires; M. Schmitt; A. Chedin; N. Scott

  • Affiliations:
  • CNRS, Ecole Polytechnique, Palaiseau

  • Venue:
  • IEEE Transactions on Neural Networks
  • Year:
  • 1999

Abstract

In an approximation problem with a neural network, a low output root-mean-square error is not always a sufficient criterion of quality. We investigate problems where the Jacobians (the first derivatives of an output value with respect to an input value) of the approximation model are needed, and we propose to add a quality criterion on these Jacobians during the learning step. More specifically, we focus on the approximation of functionals 𝒜 from a space of continuous functions (discretized in practice) to a scalar space. Here the approximation is confronted with the compensation phenomenon: a lower contribution of one input can be compensated by a larger contribution of its neighboring inputs. As a result, the profiles (with respect to the input index) of the neural Jacobians are very irregular instead of smooth, and the approximation of 𝒜 becomes an ill-posed problem because many solutions can be chosen by the learning process. We propose to introduce the smoothness of the Jacobian profiles as a priori information via a regularization technique, and we develop a new and efficient learning algorithm called "weight smoothing". We assess the robustness of the weight smoothing algorithm by testing it on a real and complex problem from meteorology: the neural approximation of the forward model of the radiative transfer equation in the atmosphere. The stabilized Jacobians of this model are then used in an inversion process to illustrate the improvement of the Jacobians after weight smoothing.
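
To make the idea concrete, here is a minimal sketch of how such a regularizer can be attached to the training loss of a one-hidden-layer MLP. It assumes, as one plausible reading of "weight smoothing" (the abstract does not give the exact term, which is defined in the paper), that the penalty is a squared second difference along the input index of the first-layer weight rows, since for an MLP the input-output Jacobian is a combination of those rows. The names mlp, smoothness_penalty, and lam are illustrative, not from the paper.

```python
import jax
import jax.numpy as jnp

def mlp(params, x):
    """One-hidden-layer MLP mapping a discretized input profile x to a scalar."""
    W1, b1, w2, b2 = params
    h = jnp.tanh(W1 @ x + b1)   # hidden activations
    return w2 @ h + b2          # scalar output

def smoothness_penalty(W1):
    """Squared second difference of each first-layer weight row along the
    input index i: w[i-1] - 2*w[i] + w[i+1] (an assumed form of the penalty)."""
    d2 = W1[:, :-2] - 2.0 * W1[:, 1:-1] + W1[:, 2:]
    return jnp.sum(d2 ** 2)

def loss(params, X, y, lam=1e-3):
    """Usual mean-squared fit plus the smoothing regularizer, weighted by lam."""
    preds = jax.vmap(lambda x: mlp(params, x))(X)
    mse = jnp.mean((preds - y) ** 2)
    return mse + lam * smoothness_penalty(params[0])

# After training, the Jacobian profile (d output / d x_i) comes directly from
# autodiff, e.g. jax.grad(lambda x: mlp(params, x))(x0), and its smoothness
# across the input index i can be inspected.
```

Penalizing the first-layer weights rather than the Jacobians themselves keeps the extra cost of the regularizer small, since the penalty and its gradient are computed from the weights alone, without extra forward or backward passes.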