2011 Special Issue: Can dictionary-based computational models outperform the best linear ones?

  • Authors:
  • Giorgio Gnecco; Věra Kůrková; Marcello Sanguineti

  • Affiliations:
  • Department of Communications, Computer, and System Sciences (DIST), University of Genoa, Via Opera Pia 13, 16145 Genova, Italy; Institute of Computer Science, Academy of Sciences of the Czech Republic, Pod Vodárenskou věží 2, 182 07 Prague 8, Czech Republic; Department of Communications, Computer, and System Sciences (DIST), University of Genoa, Via Opera Pia 13, 16145 Genova, Italy

  • Venue:
  • Neural Networks
  • Year:
  • 2011

Abstract

Approximation capabilities of two types of computational models are explored: dictionary-based models (i.e., linear combinations of n-tuples of basis functions computable by units belonging to a set called a "dictionary") and linear ones (i.e., linear combinations of n fixed basis functions). The two models are compared in terms of approximation rates, i.e., the speeds of decrease of approximation errors as the number n of basis functions grows. Proofs of upper bounds on approximation rates by dictionary-based models are inspected to show that, for individual functions, they do not imply estimates for dictionary-based models that do not also hold for some linear models. In contrast, the possibility of faster approximation rates by dictionary-based models is demonstrated for worst-case errors in the approximation of suitable sets of functions; for such sets, even geometric upper bounds hold.
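
The sketch below is not from the paper; it only illustrates the distinction the abstract draws between the two model types. It assumes a dictionary of Gaussian units on [0, 1] and monomials as the fixed linear basis, and it selects dictionary units with a simple matching-pursuit-style greedy rule, which stands in for the variable choice of n-tuples from the dictionary; the target function, dictionary parameters, and selection rule are all illustrative choices, not the authors' constructions.

```python
import numpy as np

# Sample points on [0, 1] and an illustrative target function to approximate.
x = np.linspace(0.0, 1.0, 200)
target = np.sin(6 * np.pi * x) * np.exp(-3 * x)

# Dictionary: Gaussian units g(x) = exp(-(x - c)^2 / (2 s^2)) over many centers and widths.
centers = np.linspace(0.0, 1.0, 50)
widths = [0.02, 0.05, 0.1, 0.2]
dictionary = np.column_stack([
    np.exp(-(x - c) ** 2 / (2 * s ** 2)) for c in centers for s in widths
])

def linear_error(n):
    """Linear model: best coefficients for the n *fixed* basis functions 1, x, ..., x^(n-1)."""
    basis = np.column_stack([x ** k for k in range(n)])
    coeffs, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return np.linalg.norm(basis @ coeffs - target)

def dictionary_error(n):
    """Dictionary-based model: n units *chosen from the dictionary* by a greedy rule."""
    selected = []
    residual = target.copy()
    for _ in range(n):
        # Pick the unit most correlated with the current residual.
        scores = np.abs(dictionary.T @ residual)
        scores[selected] = -np.inf  # do not reselect an already chosen unit
        selected.append(int(np.argmax(scores)))
        sub = dictionary[:, selected]
        coeffs, *_ = np.linalg.lstsq(sub, target, rcond=None)
        residual = target - sub @ coeffs
    return np.linalg.norm(residual)

# Compare how the two approximation errors decrease as n grows.
for n in (2, 4, 8, 16):
    print(f"n={n:2d}  linear error={linear_error(n):8.4f}  "
          f"dictionary error={dictionary_error(n):8.4f}")
```

On a single target function such as this one, either model may win for a given n; the paper's point is that provable advantages of dictionary-based models appear for worst-case errors over suitable sets of functions, not for individual functions.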