Theoretical Properties of Projection Based Multilayer Perceptrons with Functional Inputs

  • Authors:
  • Fabrice Rossi; Brieuc Conan-Guez

  • Affiliations:
  • Projet AxIS, INRIA, Domaine de Voluceau, Rocquencourt, F-78153 Le Chesnay Cedex, France; LITA EA3097, Université de Metz, F-57045 Metz, France

  • Venue:
  • Neural Processing Letters
  • Year:
  • 2006

Abstract

Many real-world data sets consist of sampled functions. As shown by Functional Data Analysis (FDA) methods, spectra, time series, images, gesture recognition data, etc. can be processed more efficiently if their functional nature is taken into account during the data analysis process. This is done by extending standard data analysis methods so that they can apply to functional inputs. A general way to achieve this goal is to compute projections of the functional data onto a finite dimensional sub-space of the functional space. The coordinates of the data on a basis of this sub-space provide standard vector representations of the functions, and the resulting vectors can be processed by any standard method. In [43], this general approach was used to define projection based Multilayer Perceptrons (MLPs) with functional inputs. In this paper we study important theoretical properties of the proposed model. We show in particular that MLPs with functional inputs are universal approximators: they can approximate to arbitrary accuracy any continuous mapping from a compact sub-space of a functional space to $\mathbb{R}$. Moreover, we provide a consistency result showing that any mapping from a functional space to $\mathbb{R}$ can be learned from examples by a projection based MLP: the generalization mean square error of the MLP decreases to the smallest possible mean square error on the data as the number of examples goes to infinity.
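The projection step described in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the trigonometric basis, the sample functions, and the (untrained) single-hidden-layer MLP below are all illustrative assumptions; the point is only that least-squares projection onto a finite basis turns each sampled function into an ordinary coordinate vector that a standard MLP can consume.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sampled functions on a common grid (hypothetical noisy sinusoids).
t = np.linspace(0.0, 1.0, 100)
X = np.stack([np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size)
              for f in (1.0, 2.0, 3.0)])            # shape (3, 100)

# A 5-dimensional sub-space of the functional space: constant term
# plus two sine/cosine pairs (an arbitrary choice for illustration).
basis = np.stack([np.ones_like(t),
                  np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
                  np.sin(4 * np.pi * t), np.cos(4 * np.pi * t)])

# Projection: least-squares coordinates of each function on the basis.
coords, *_ = np.linalg.lstsq(basis.T, X.T, rcond=None)
coords = coords.T                                    # shape (3, 5)

# The coordinate vectors are standard finite-dimensional inputs, so any
# ordinary MLP applies; here an untrained one-hidden-layer network
# stands in for the trained model, mapping each function to R.
W1 = rng.standard_normal((coords.shape[1], 8))
W2 = rng.standard_normal((8, 1))
outputs = np.tanh(coords @ W1) @ W2                  # shape (3, 1)
```

In the paper's setting the basis is fixed and the MLP weights are fitted to examples; the universal approximation and consistency results concern exactly this composite map from function to coordinates to real-valued output.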