Dependence of Computational Models on Input Dimension: Tractability of Approximation and Optimization Tasks

  • Authors:
  • P. C. Kainen; V. Kurkova; M. Sanguineti

  • Affiliations:
  • Dept. of Math. & Stat., Georgetown Univ., Washington, DC, USA

  • Venue:
  • IEEE Transactions on Information Theory
  • Year:
  • 2012

Abstract

The role of the input dimension d is studied in approximating, in various norms, target sets of d-variable functions using linear combinations of adjustable computational units. Results from the literature, which emphasize the number n of terms in the linear combination, are reformulated, and in some cases improved, with particular attention to the dependence on d. For worst-case error, upper bounds are given in the factorized form ξ(d)κ(n), where κ is nonincreasing (typically κ(n) ~ n^{-1/2}). Target sets of functions are described for which the function ξ is a polynomial. Some important cases are highlighted where ξ decreases to zero as d → ∞. For target functions, extent (e.g., the size of the domains in R^d on which they are defined), scale (e.g., maximum norms of target functions), and smoothness (e.g., the order of square-integrable partial derivatives) may depend on d, and the influence of such dimension-dependent parameters on model complexity is considered. Results are applied to approximation and to the solution of optimization problems by neural networks with perceptron and Gaussian radial computational units.
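The factorized bound ξ(d)κ(n) described in the abstract can be illustrated with a minimal sketch. The polynomial choice ξ(d) = d below is a hypothetical example of the "ξ is a polynomial" case, not a rate taken from the paper:

```python
def worst_case_bound(xi_d, n):
    """Upper bound on worst-case approximation error in the
    factorized form xi(d) * kappa(n), with the typical rate
    kappa(n) = n**(-1/2) mentioned in the abstract."""
    return xi_d * n ** -0.5

def xi_poly(d):
    """Hypothetical polynomial dimension factor xi(d) = d,
    used purely for illustration."""
    return float(d)

# For d = 4 input variables, the bound shrinks as the number
# of computational units n grows:
for n in (100, 400, 1600):
    print(n, worst_case_bound(xi_poly(4), n))
```

With ξ(d) = 4 the bound halves each time n is quadrupled (0.4, 0.2, 0.1), reflecting the n^{-1/2} rate of κ.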