Genetic optimization of ART neural network architectures

  • Authors:
  • Assem Kaylani, Michael Georgiopoulos, Mansooreh Mollaghasemi, Georgios Anagnostopoulos

  • Affiliations:
  • University of Central Florida, Orlando, FL (Kaylani, Georgiopoulos, Mollaghasemi); Florida Institute of Technology, Melbourne, FL (Anagnostopoulos)

  • Venue:
  • ASC '07: Proceedings of the Eleventh IASTED International Conference on Artificial Intelligence and Soft Computing
  • Year:
  • 2007

Abstract

Adaptive Resonance Theory (ART) neural network architectures, such as Fuzzy ARTMAP (FAM), Ellipsoidal ARTMAP (EAM), and Gaussian ARTMAP (GAM), have successfully solved a variety of classification problems. However, they suffer from an inherent ART problem: they create larger architectures than is necessary to solve the problem at hand (referred to as the ART category proliferation problem). This problem is especially amplified for classification problems that involve noisy data and/or data belonging to different labels that significantly overlap. A variety of modified ART architectures, referred to as semi-supervised (ss) ART architectures (e.g., ssFAM, ssEAM, ssGAM) and summarily referred to as ssART, have addressed the category proliferation problem. In this paper, we propose another approach to solving the ART category proliferation problem: designing genetically engineered ART architectures, such as GFAM, GEAM, and GGAM, summarily referred to as GART. In particular, we explain how to design GART architectures and compare their performance (in terms of accuracy, size, and computational complexity) with that of the ssART architectures. Our results demonstrate that GART is superior to ssART, and quite often it produces the optimal classifier.
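The abstract does not specify the genetic operators used in GART. As a hedged illustration only, the sketch below evolves a subset of ART categories under a hypothetical accuracy-versus-size fitness: each "category" is stood in for by a toy set of training points it covers, and the fitness rewards coverage (an accuracy proxy) while penalizing network size, mirroring the accuracy/size trade-off the paper measures. The pool, fitness weights, and operators here are assumptions, not the paper's GFAM/GEAM/GGAM design.

```python
import random

random.seed(0)

# Toy stand-in for trained ART categories: each covers a random subset of
# training points. (Real GART categories would be FAM hyperboxes, EAM
# ellipsoids, or GAM Gaussians -- this pool is a hypothetical placeholder.)
N_POINTS = 100
POOL = [set(random.sample(range(N_POINTS), random.randint(5, 30)))
        for _ in range(20)]

def fitness(mask):
    """Accuracy proxy (fraction of points covered) minus a size penalty."""
    chosen = [POOL[i] for i, bit in enumerate(mask) if bit]
    covered = set().union(*chosen) if chosen else set()
    return len(covered) / N_POINTS - 0.02 * sum(mask)

def evolve(pop_size=30, generations=40, p_mut=0.05):
    # Chromosome: one bit per candidate category (keep / drop).
    pop = [[random.randint(0, 1) for _ in POOL] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]          # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(POOL))   # one-point crossover
            child = a[:cut] + b[cut:]
            # Bit-flip mutation with probability p_mut per gene.
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

best = evolve()
```

Under this toy fitness, the GA tends toward small category subsets with high coverage, which is the qualitative behavior the abstract claims for GART (smaller networks at comparable or better accuracy).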