Efficient kernel models for learning and approximate minimization problems

  • Authors:
  • C. Cervellera; M. Gaggero; D. Macciò

  • Affiliations:
  • Institute of Intelligent Systems for Automation, National Research Council, Via De Marini 6, 16149 Genoa, Italy (all authors)

  • Venue:
  • Neurocomputing
  • Year:
  • 2012


Abstract

This paper investigates techniques for reducing the computational burden of local learning methods based on kernel functions in the framework of approximate minimization, i.e., when they are employed to find the minimum of a given cost functional. The considered approach relies on an optimal choice of the kernel width parameters through the minimization of an empirical cost, and can provide a solution to important problems such as function approximation and multistage optimization. However, when the number of stored data points is large, evaluating the kernel model output can take a long time, making local learning unsuited to contexts where fast function evaluation is required. At the same time, the training procedure to obtain the kernel widths can become too demanding as well. Here it is shown that a large saving in computational effort can be achieved by considering subsets of the available data, suitably chosen according to different criteria. An analysis of the performance of the new approach is provided. Then, simulation results show the effectiveness of the proposed techniques when applied to learning and approximate minimization problems.
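As a rough illustration of the ideas in the abstract (not the authors' actual method, whose details are in the paper), the sketch below uses a Nadaraya-Watson local kernel model: the kernel width is chosen by minimizing an empirical cost on held-out data, and the model is then evaluated on a random subset of the stored data to cut the per-query cost. All function names, the Gaussian kernel choice, and the subset size are illustrative assumptions.

```python
import numpy as np

def nw_estimate(x, X, y, width):
    """Nadaraya-Watson local estimate of y at query point x (Gaussian kernel)."""
    w = np.exp(-0.5 * ((X - x) / width) ** 2)  # kernel weights for stored data
    return np.sum(w * y) / np.sum(w)

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, 2000)
y = np.sin(X) + 0.1 * rng.standard_normal(2000)  # noisy samples of sin(x)

# Width selection: minimize an empirical cost (mean squared error) on a
# held-out validation set, as a stand-in for the paper's empirical-cost criterion.
X_val, y_val = X[:200], y[:200]
X_tr, y_tr = X[200:], y[200:]
widths = [0.05, 0.1, 0.3, 0.5, 1.0]
errs = [np.mean([(nw_estimate(xv, X_tr, y_tr, h) - yv) ** 2
                 for xv, yv in zip(X_val, y_val)]) for h in widths]
best_h = widths[int(np.argmin(errs))]

# Computational saving: evaluate the model using only a random subset of the
# stored data instead of all of it (one possible subset-selection criterion).
idx = rng.choice(len(X_tr), size=200, replace=False)
full_pred = nw_estimate(0.5, X_tr, y_tr, best_h)        # all 1800 points
sub_pred = nw_estimate(0.5, X_tr[idx], y_tr[idx], best_h)  # 200 points
```

The subset-based evaluation touches an order of magnitude fewer stored points per query while, for smooth targets, yielding a prediction close to the full-data one; the paper studies criteria for choosing such subsets more carefully than uniform random sampling.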