Approximating pseudo-Boolean functions on non-uniform domains

  • Authors:
  • R. F. Lax (Department of Mathematics, LSU, Baton Rouge, LA)
  • Guoli Ding (Department of Mathematics, LSU, Baton Rouge, LA)
  • Peter P. Chen (Department of Computer Science, LSU, Baton Rouge, LA)
  • J. Chen (Department of Computer Science, LSU, Baton Rouge, LA)

  • Venue:
  • IJCAI'05: Proceedings of the 19th International Joint Conference on Artificial Intelligence
  • Year:
  • 2005

Abstract

In Machine Learning (ML) and Evolutionary Computation (EC), it is often beneficial to approximate a complicated function by a simpler one, such as a linear or quadratic function, for computational efficiency or feasibility reasons (cf. [Jin, 2005]). A complicated function (the target function in ML or the fitness function in EC) may require an exponential amount of computation to learn/evaluate, and thus approximations by simpler functions are needed. We consider the problem of approximating pseudo-Boolean functions by simpler (e.g., linear) functions when the instance space is associated with a probability distribution. We consider {0, 1}^n as a sample space with a (possibly nonuniform) probability measure on it, thus making pseudo-Boolean functions into random variables. This is also in the spirit of the PAC learning framework of Valiant [Valiant, 1984], where the instance space has a probability distribution on it. The best approximation to a target function f is then defined as the function g (from all possible approximating functions of the simpler form) that minimizes the expected distance to f. In an example, we use methods from linear algebra to find, in this more general setting, the best approximation to a given pseudo-Boolean function by a linear function.
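
To illustrate the kind of computation the abstract describes, the following is a minimal sketch, assuming the "expected distance" is the expected squared error under the probability measure; in that case the best linear approximation reduces to a weighted least-squares problem. The function names, the example target function, and the product probability measure below are hypothetical illustrations and are not taken from the paper.

```python
import itertools
import numpy as np

def best_linear_approximation(f, p, n):
    """Fit g(x) = c0 + c1*x1 + ... + cn*xn to a pseudo-Boolean function f
    on {0,1}^n, minimizing the expected squared error under the
    probability measure p (assumed distance criterion)."""
    points = list(itertools.product([0, 1], repeat=n))
    # Design matrix: a constant column followed by the n coordinates.
    X = np.array([[1] + list(x) for x in points], dtype=float)
    y = np.array([f(x) for x in points], dtype=float)
    w = np.array([p(x) for x in points], dtype=float)
    # Minimize sum_x p(x) * (f(x) - g(x))^2 via weighted least squares:
    # scale rows by sqrt(p(x)) and solve the ordinary least-squares system.
    sw = np.sqrt(w)
    coeffs, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return coeffs  # [c0, c1, ..., cn]

# Hypothetical example: a quadratic (XOR-like) target on {0,1}^2 with a
# non-uniform product measure.
f = lambda x: x[0] + x[1] - 2 * x[0] * x[1]
p = lambda x: (0.7 if x[0] else 0.3) * (0.6 if x[1] else 0.4)
print(best_linear_approximation(f, p, 2))
```

With the uniform measure this recovers the classical linear (degree-1 Fourier) approximation of a pseudo-Boolean function; under a non-uniform measure the coefficients shift toward the points that carry more probability mass, which is the effect the paper studies.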