In this paper, the estimation of a regression function from independent and identically distributed data is considered. Estimates are defined by minimizing the empirical L2 risk over a class of functions given by maxima of minima of linear functions. Results concerning the rate of convergence of these estimates are derived. In particular, it is shown that for smooth regression functions satisfying the assumption of single index models, the estimates achieve (up to a logarithmic factor) the optimal one-dimensional rate of convergence; hence, under these assumptions, they circumvent the so-called curse of dimensionality. The small-sample behavior of the estimates is illustrated by applying them to simulated data.
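The estimator described above can be sketched in code. The following is a minimal illustration, not the paper's actual fitting algorithm: it parameterizes a max-min of linear functions and minimizes the empirical L2 risk on simulated data with a generic off-the-shelf optimizer (scipy's Nelder-Mead). The class sizes `K` and `M` and the target function are assumptions chosen for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Simulated data: y = |x| + noise. Note |x| = max(x, -x) is itself a
# maximum of (minima of) linear functions, so the class can represent it.
n = 200
X = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.abs(X[:, 0]) + 0.05 * rng.standard_normal(n)

K, M = 2, 2  # K outer maxima, each over a minimum of M linear functions
d = X.shape[1]

def maxmin_predict(params, X):
    """Evaluate f(x) = max_k min_j (a_{kj} . x + b_{kj})."""
    a = params[: K * M * d].reshape(K, M, d)
    b = params[K * M * d:].reshape(K, M)
    lin = np.einsum('kmd,nd->nkm', a, X) + b  # all K*M linear pieces
    return lin.min(axis=2).max(axis=1)       # min over j, then max over k

def empirical_l2_risk(params):
    """Mean squared error of the max-min function on the sample."""
    return np.mean((maxmin_predict(params, X) - y) ** 2)

# Minimize the empirical L2 risk over the parameters of the class.
p0 = rng.standard_normal(K * M * d + K * M)
res = minimize(empirical_l2_risk, p0, method='Nelder-Mead',
               options={'maxiter': 5000})
```

A generic derivative-free optimizer may only find a local minimum of this non-convex risk; the paper's estimator is defined by the minimization problem itself, and in practice one would use a dedicated algorithm or multiple restarts.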