Similarity between patterns is commonly measured in many distance-based classification algorithms such as KNN or RBF networks. Generalized Euclidean Distances (GEDs) can be optimized to improve the classification success rate of distance-based algorithms. This idea can be extended to any classification algorithm, because it can be shown that a GED is equivalent to a linear transformation of the dataset. In this paper, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is applied to the optimization of linear transformations represented as matrices. The method has been tested on several domains, and the results show that the classification success rate can be improved for some of them. However, in some domains diagonal matrices achieve higher accuracy than full square ones. To address this problem, the second part of the paper proposes representing linear transformations by means of rotation angles and scaling factors, based on the Singular Value Decomposition (SVD) theorem. This new representation overcomes the problems found in the first part.
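The two key ideas in the abstract can be illustrated with a minimal NumPy sketch (the matrix `A`, the angles, and the scaling factors below are arbitrary placeholders, not values from the paper): first, a GED with metric matrix M = AᵀA equals the plain Euclidean distance computed after transforming the data with A; second, in the SVD-style representation a full transformation is assembled from rotation angles and scaling factors, with diagonal scaling as the special case of zero rotation.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- GED is Euclidean distance in a linearly transformed space ---
A = rng.standard_normal((3, 3))   # hypothetical transformation matrix
M = A.T @ A                       # metric matrix of the induced GED (PSD)

x, y = rng.standard_normal(3), rng.standard_normal(3)
d_ged = np.sqrt((x - y) @ M @ (x - y))    # GED between x and y under M
d_euclid = np.linalg.norm(A @ x - A @ y)  # Euclidean distance after applying A

assert np.isclose(d_ged, d_euclid)

# --- SVD-style parameterization: rotation angles + scaling factors ---
def rotation_2d(theta):
    """2-D rotation matrix for angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

theta_u, theta_v = 0.7, -1.2     # rotation angles (optimization variables)
scales = np.array([2.0, 0.5])    # positive scaling factors (optimization variables)

# T = U * diag(scales) * V^T, mirroring the SVD factorization
T = rotation_2d(theta_u) @ np.diag(scales) @ rotation_2d(theta_v).T

# Setting both angles to zero recovers a purely diagonal scaling matrix
```

In this parameterization, an optimizer such as CMA-ES would search over the angles and scaling factors rather than over the raw matrix entries, so the diagonal case is always reachable as a special point of the search space.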