Machine learning with genetic multivariate polynomials

  • Authors:
  • Angel Fernando Kuri-Morales

  • Affiliations:
  • Departamento de Computación, Instituto Tecnológico Autónomo de México, México

  • Venue:
  • AIKED'06 Proceedings of the 5th WSEAS International Conference on Artificial Intelligence, Knowledge Engineering and Data Bases
  • Year:
  • 2006

Abstract

We present an algorithm which is able to fit a set of data with unknown characteristics. The algorithm rests on two premises: on the one hand, that a sufficiently large set of examples will adequately embody the essence of a numerically expressible phenomenon; on the other, that such essence may be synthesized by a polynomial of the form F(V_1, ..., V_n) = Σ_{i_1=0}^{d_1} ⋯ Σ_{i_n=0}^{d_n} µ_{i_1...i_n} c_{i_1...i_n} V_1^{i_1} ⋯ V_n^{i_n}, where each µ_{i_1...i_n} may only take the value 0 or 1 depending on whether the corresponding monomial is adequate. To determine the adequacy of each monomial we resort to a genetic algorithm which minimizes the fitting error of the candidate polynomials. We analyze a selection of data sets and find the best approximating polynomial for each. We compare our results with those obtained from multi-layer perceptron networks trained with the well-known backpropagation algorithm (BMLPs). We show that our Genetic Multivariate Polynomials (GMPs) compare favorably with the corresponding BMLPs while avoiding the "black box" characteristic of the latter, a frequently cited disadvantage. We also discuss the minimizing genetic algorithm (GA).
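
The following Python sketch is not the authors' implementation; it only illustrates, under assumed settings (maximum degrees, GA operators, and a toy two-variable data set are all hypothetical), the general idea described in the abstract: a binary mask µ over candidate monomials V_1^{i_1}·V_2^{i_2} is evolved by a simple genetic algorithm, coefficients c for the selected monomials are fitted by least squares, and the fitness being minimized is the fitting error.

    # Minimal sketch of GA-based monomial selection for a multivariate polynomial fit.
    # All concrete choices (degrees, population size, operators, data) are assumptions.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: two variables, target is a sparse polynomial plus noise (illustrative).
    N = 200
    V = rng.uniform(-1.0, 1.0, size=(N, 2))
    y = 3.0 * V[:, 0] ** 2 * V[:, 1] - 2.0 * V[:, 1] + 0.5 + 0.01 * rng.normal(size=N)

    degrees = (3, 3)  # assumed maximum degree per variable
    exponents = list(itertools.product(range(degrees[0] + 1), range(degrees[1] + 1)))
    # Design matrix: one column per candidate monomial V1^i1 * V2^i2.
    Phi = np.column_stack([V[:, 0] ** i1 * V[:, 1] ** i2 for i1, i2 in exponents])
    M = Phi.shape[1]

    def fitness(mask):
        """RMS fitting error of the polynomial restricted to the selected monomials."""
        if not mask.any():
            return np.inf
        cols = Phi[:, mask.astype(bool)]
        coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
        return np.sqrt(np.mean((cols @ coef - y) ** 2))

    # Plain generational GA: tournament selection, one-point crossover, bit-flip mutation.
    pop_size, generations, p_mut = 40, 60, 1.0 / M
    pop = rng.integers(0, 2, size=(pop_size, M))

    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        new_pop = [pop[np.argmin(scores)].copy()]          # elitism: keep the best mask
        while len(new_pop) < pop_size:
            a, b = rng.integers(0, pop_size, 2), rng.integers(0, pop_size, 2)
            p1 = pop[a[np.argmin(scores[a])]]               # tournament winners
            p2 = pop[b[np.argmin(scores[b])]]
            cut = rng.integers(1, M)                        # one-point crossover
            child = np.concatenate([p1[:cut], p2[cut:]])
            flip = rng.random(M) < p_mut                    # bit-flip mutation
            new_pop.append(np.where(flip, 1 - child, child))
        pop = np.array(new_pop)

    best = pop[np.argmin([fitness(ind) for ind in pop])]
    print("selected monomials (i1, i2):",
          [exponents[k] for k in range(M) if best[k]])
    print("fitting RMS error:", fitness(best))

Because the coefficients are refit by least squares for every candidate mask, the GA only has to search the combinatorial space of monomial subsets, which is the part of the problem a gradient-based fit cannot handle directly.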