Approximation and Estimation Bounds for Artificial Neural Networks. Machine Learning (special issue on computational learning theory).
Simultaneous non-parametric regressions of unbalanced longitudinal data. Computational Statistics & Data Analysis.
Multi-layer Perceptrons for Functional Data Analysis: A Projection Based Approach. Proceedings of the International Conference on Artificial Neural Networks (ICANN '02).
Representation of functional data in neural networks. Neurocomputing.
Functional classification in Hilbert spaces. IEEE Transactions on Information Theory.
Nonparametric estimation via empirical risk minimization. IEEE Transactions on Information Theory.
IEEE Transactions on Neural Networks.
IEEE Transactions on Neural Networks.
Functional classification of ornamental stone using machine learning techniques. Journal of Computational and Applied Mathematics.
Consistency of functional learning methods based on derivatives. Pattern Recognition Letters.
A functional approach to variable selection in spectrometric problems. Proceedings of the 16th International Conference on Artificial Neural Networks (ICANN '06), Part I.
Many real-world data are sampled functions. As shown by Functional Data Analysis (FDA) methods, spectra, time series, images, gesture recognition data, etc., can be processed more efficiently if their functional nature is taken into account during the data analysis process. This is done by extending standard data analysis methods so that they apply to functional inputs. A general way to achieve this goal is to compute projections of the functional data onto a finite-dimensional subspace of the functional space. The coordinates of the data on a basis of this subspace provide standard vector representations of the functions, and the resulting vectors can be processed by any standard method.

In [43], this general approach was used to define projection-based Multilayer Perceptrons (MLPs) with functional inputs. In this paper we study important theoretical properties of the proposed model. We show in particular that MLPs with functional inputs are universal approximators: they can approximate to arbitrary accuracy any continuous mapping from a compact subspace of a functional space to $$\mathbb{R}$$. Moreover, we provide a consistency result showing that any mapping from a functional space to $$\mathbb{R}$$ can be learned from examples by a projection-based MLP: the generalization mean square error of the MLP decreases to the smallest achievable mean square error on the data as the number of examples goes to infinity.
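To make the projection step concrete, here is a minimal sketch, not the authors' implementation: the truncated Fourier basis, the subspace dimension, the synthetic curves and target, and the use of scikit-learn's MLPRegressor are all illustrative assumptions. Functions sampled on a common grid are projected onto a finite-dimensional basis, and the resulting coordinate vectors are fed to a standard MLP.

```python
# Sketch of a projection-based MLP with functional inputs (illustrative only).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)          # common sampling grid on [0, 1]
n_funcs, n_basis = 500, 7               # sample size and subspace dimension

# Truncated Fourier basis spanning the projection subspace (orthonormal on [0, 1]).
basis = [np.ones_like(t)]
for k in range(1, n_basis // 2 + 1):
    basis.append(np.sqrt(2) * np.cos(2 * np.pi * k * t))
    basis.append(np.sqrt(2) * np.sin(2 * np.pi * k * t))
B = np.stack(basis[:n_basis])           # shape (n_basis, len(t))

# Synthetic functional inputs: random smooth curves plus sampling noise.
coef_true = rng.normal(size=(n_funcs, n_basis))
X_func = coef_true @ B + 0.1 * rng.normal(size=(n_funcs, len(t)))

# Projection step: coordinates of each sampled function on the basis,
# approximating the L2 inner products by a Riemann sum over the grid.
X_coords = X_func @ B.T * (t[1] - t[0])

# Illustrative scalar target: a continuous functional of the input curve.
y = np.sin(X_coords[:, 0]) + X_coords[:, 1] ** 2

# Any standard method can now process the coordinate vectors; here, an MLP.
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
mlp.fit(X_coords, y)
print("training MSE:", np.mean((mlp.predict(X_coords) - y) ** 2))
```

In line with the universal approximation result above, enlarging the projection subspace and the hidden layer allows such a model to approximate continuous functionals to arbitrary accuracy.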