Shared domains of competence of approximate learning models using measures of separability of classes

  • Authors:
  • Julián Luengo; Francisco Herrera

  • Affiliations:
  • Dept. of Computer Science and Artificial Intelligence, CITIC-UGR, 18071 Granada, Spain (both authors)

  • Venue:
  • Information Sciences: an International Journal
  • Year:
  • 2012


Abstract

In this work we jointly analyze the performance of three classic Artificial Neural Network models and one Support Vector Machine with respect to a series of data complexity measures known as measures of separability of classes. In particular, we consider a Radial Basis Function Network, a Multi-Layer Perceptron, and a Learning Vector Quantization network, while the Sequential Minimal Optimization method is used to train the Support Vector Machine. We consider five measures of separability of classes, which have proved to be highly discriminative when analyzing classifier performance, over a wide range of data sets built from real data. We find that two of these measures, owing to their related nature, allow us to extract common behavior patterns for the four learning methods. Using these two metrics, we obtain rules that describe both good and bad performance of the Artificial Neural Networks and the Support Vector Machine. With the obtained rules, we characterize the performance of the methods from the data set complexity metrics, and their common domains of competence are thereby established. Using these domains of competence, the shared strengths and weaknesses of the four models can be used to predict, in advance, whether the approximate models will perform well or poorly on a given problem, or whether a more complex model configuration is needed.
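The abstract does not name the five measures of separability of classes used in the paper. As an illustrative sketch only, the following computes one classic measure from this family, the maximum Fisher's discriminant ratio (commonly denoted F1 in the data-complexity literature): a per-feature ratio of between-class mean separation to within-class variance, maximized over features. The function name and two-class restriction are assumptions for this sketch, not the paper's implementation.

```python
import numpy as np

def fisher_ratio_f1(X, y):
    """Maximum Fisher's discriminant ratio (F1) over features for a
    two-class dataset. Higher values indicate better class separability.

    Illustrative sketch: handles the two-class case only; the paper's
    actual measures and implementation may differ.
    """
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    classes = np.unique(y)
    assert len(classes) == 2, "this sketch handles the two-class case only"
    a, b = X[y == classes[0]], X[y == classes[1]]
    # Per-feature ratio: (mu_a - mu_b)^2 / (var_a + var_b)
    num = (a.mean(axis=0) - b.mean(axis=0)) ** 2
    den = a.var(axis=0) + b.var(axis=0)
    # Guard against zero-variance features to avoid division by zero
    ratios = np.divide(num, den, out=np.zeros_like(num), where=den > 0)
    return float(ratios.max())

# Well-separated classes yield a large F1; overlapping classes a small one.
well_separated = fisher_ratio_f1([[0.0], [0.1], [10.0], [10.1]], [0, 0, 1, 1])
overlapping = fisher_ratio_f1([[0.0], [1.0], [0.5], [1.5]], [0, 0, 1, 1])
```

A rule of the kind the paper describes would then take the form of thresholds on such metrics (e.g. "F1 above some cutoff implies good expected performance for all four models").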