Towards a methodology to search for near-optimal representations in classification problems

  • Authors:
Manuel del Valle; Beatriz Sánchez; Luis F. Lago-Fernández; Fernando J. Corbacho

  • Affiliations:
Escuela Politécnica Superior, Universidad Autónoma de Madrid, Madrid, Spain (all authors)

  • Venue:
IWINAC'05: Proceedings of the First International Work-Conference on the Interplay Between Natural and Artificial Computation, Part II: Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach
  • Year:
  • 2005


Abstract

This paper provides a first step towards a methodology for finding near-optimal representations in classification problems by combining feature transformations from an initial family of basis functions. The original representation of the problem data may not be the most appropriate, and it might therefore be necessary to search for a new representation space that is closer to the structure of the problem to be solved. The outcome of this search is critical for the successful solution of the problem. For instance, if the objective function has certain global statistical properties, such as periodicity, methods based on local pattern information will struggle to capture the underlying structure and, hence, to generalize. Conversely, once such an optimal representation is found, many problems can be solved by a linear method. The key, then, is to find the proper representation. As a proof of concept we present a particular problem in which the class distributions overlap in a very intricate way in the space of original attributes. For this problem, the proposed algorithm finds a representation based on the trigonometric basis that provides a solution where some classical learning methods, e.g. multilayer perceptrons and decision trees, fail. The methodology is composed of a discrete search within the space of basis functions and a linear mapping performed by a Fisher discriminant. We place special emphasis on the first part. Finding the optimal combination of basis functions is a difficult problem because of its non-gradient nature and the large number of possible combinations. We rely on the global search capabilities of a genetic algorithm to scan the space of function compositions.
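
The following is a minimal Python sketch of the two-stage idea described in the abstract: a genetic algorithm searches a discrete space of trigonometric feature transformations, and a Fisher (linear) discriminant evaluates each candidate representation. It is not the authors' implementation; the candidate frequencies, GA settings, use of scikit-learn's LinearDiscriminantAnalysis as the discriminant stage, and the toy periodic dataset are illustrative assumptions.

```python
# Sketch: GA over trigonometric basis-function columns + Fisher discriminant.
# NOTE: frequencies, GA parameters and the toy dataset are assumptions for
# illustration, not the paper's actual experimental setup.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Candidate basis functions applied per original attribute.
FREQS = [1.0, 2.0, 4.0, 8.0]
BASIS = [("id", lambda x: x)]
BASIS += [(f"sin{w}", (lambda w: lambda x: np.sin(w * x))(w)) for w in FREQS]
BASIS += [(f"cos{w}", (lambda w: lambda x: np.cos(w * x))(w)) for w in FREQS]

def transform(X, mask):
    """New representation: one column per selected (attribute, basis) pair."""
    cols, k = [], 0
    for j in range(X.shape[1]):
        for _, fn in BASIS:
            if mask[k]:
                cols.append(fn(X[:, j]))
            k += 1
    return np.column_stack(cols) if cols else X  # empty mask -> original space

def fitness(mask, X, y):
    """Cross-validated accuracy of a linear (Fisher) discriminant on the mapped data."""
    return cross_val_score(LinearDiscriminantAnalysis(), transform(X, mask), y, cv=5).mean()

def ga_search(X, y, pop_size=30, n_gen=40, p_mut=0.05):
    """Binary-coded GA over the set of candidate basis-function columns."""
    n_bits = X.shape[1] * len(BASIS)
    pop = rng.integers(0, 2, size=(pop_size, n_bits))
    for _ in range(n_gen):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        new_pop = [pop[scores.argmax()].copy()]              # elitism
        while len(new_pop) < pop_size:
            parents = []
            for _ in range(2):                               # size-3 tournaments
                idx = rng.choice(pop_size, size=3)
                parents.append(pop[idx[np.argmax(scores[idx])]])
            cut = int(rng.integers(1, n_bits))               # one-point crossover
            child = np.concatenate([parents[0][:cut], parents[1][cut:]])
            flip = rng.random(n_bits) < p_mut                # bit-flip mutation
            child[flip] = 1 - child[flip]
            new_pop.append(child)
        pop = np.array(new_pop)
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()], scores.max()

# Toy problem with periodic class structure: the classes overlap heavily in the
# original attribute space but become linearly separable once sin(4x) columns
# are selected by the search.
X = rng.uniform(-np.pi, np.pi, size=(400, 2))
y = (np.sin(4 * X[:, 0]) + np.sin(4 * X[:, 1]) > 0).astype(int)
best_mask, best_acc = ga_search(X, y)
print("best cross-validated accuracy:", round(best_acc, 3))
```

In this sketch each individual is a bit mask over candidate (attribute, basis-function) columns, so the GA performs the discrete search over function compositions while the Fisher discriminant supplies the linear mapping and the fitness signal, mirroring the division of labour described in the abstract.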