Evolving artificial neural networks with generative encodings inspired by developmental biology

  • Author:
  • Jeff Clune

  • Advisors:
  • Charles Ofria; Robert T. Pennock

  • Affiliation:
  • Michigan State University

  • Venue:
  • Doctoral dissertation, Michigan State University
  • Year:
  • 2010

Abstract

In this dissertation I investigate the difference between generative encodings and direct encodings for evolutionary algorithms. Generative encodings are inspired by developmental biology and were designed, in part, to increase the regularity of synthetically evolved phenotypes. Regularity is an important design principle in both natural organisms and engineered designs. The majority of this dissertation focuses on how the property of regularity enables a generative encoding to outperform direct encoding controls, and on whether a bias towards regularity also hurts the performance of the generative encoding on some problems. I also report on whether researchers can bias the types of regularities produced by a generative encoding to accommodate user preferences. Finally, I study the degree to which a generative encoding produces another important design principle, modularity.

Several previous studies have shown that generative encodings outperform direct encodings on highly regular problems. However, prior to this dissertation, it was not known how generative encodings compare to direct encodings on problems with different levels of regularity. On three different problems, I show that a generative encoding can exploit intermediate amounts of problem regularity, which enables it to increasingly outperform direct encoding controls as problem regularity increases. This performance gap emerged because the generative encoding produced regular artificial neural networks (ANNs) that, in turn, produced regular behaviors. The ANNs evolved with the generative encoding contained a diverse array of complicated, regular neural wiring patterns, whereas the ANNs produced by a direct encoding control were irregular.

I also document that this bias towards regularity can hurt a generative encoding on problems that have some amount of irregularity. I propose a new algorithm, called HybrID, wherein a generative encoding produces regular patterns and a direct encoding modifies those patterns to provide fitness-enhancing irregularities. HybrID outperformed the generative encoding alone on three problems for nearly all levels of regularity, which raises the question of whether generative encodings may ultimately excel not as stand-alone algorithms, but when hybridized with a further process of irregular refinement.

The results described so far document that a generative encoding can produce regular solutions. I then show that, at least for the generative encoding in this case study, it is possible to influence the types of regularities produced, which allows domain knowledge and preferences to be injected into the algorithm. I also investigate whether the generative encoding can produce modular solutions, and I present the first documented case of this generative encoding producing a modular phenotype, on a simple problem. However, the generative encoding's inability to create modularity on harder problems, where modularity would have been beneficial, suggests that more work is needed to increase the likelihood that this encoding produces modular ANNs in response to challenging, decomposable problems.

Overall, this dissertation paints a more complete picture of generative encodings than prior studies. The general conclusion that can be drawn from this work is that generative encodings can produce some of the properties seen in complex natural organisms, and that they will likely be an important part of our long-term goal of synthetically evolving phenotypes that approach the capability, intelligence, and complexity of their natural rivals.
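To make the contrast described in the abstract concrete, here is a minimal, purely illustrative sketch of the three ideas it compares: a direct encoding that mutates every connection weight independently, a generative encoding whose few parameters are expanded into a regular weight pattern, and a HybrID-style combination in which a direct layer of per-connection deltas adds irregular refinements on top of that pattern. Everything in the sketch is an assumption invented for this example: the sinusoidal pattern rule, the toy squared-error fitness, the hill-climbing loops, and names such as express_generative and express_hybrid. It is not the dissertation's actual encoding, benchmark, or evolutionary algorithm.

```python
# Illustrative sketch only: the toy pattern rule, fitness function, and
# hill-climbing loops are hypothetical stand-ins for the dissertation's
# actual encodings and evolutionary algorithm, chosen to keep the example
# self-contained and to make the contrast between encodings concrete.

import math
import random

N_IN, N_OUT = 8, 8  # one small, fully connected layer of ANN weights


def direct_random_genome(rng):
    """Direct encoding: one independently mutable value per connection."""
    return [[rng.uniform(-1, 1) for _ in range(N_OUT)] for _ in range(N_IN)]


def generative_genome(rng):
    """Generative encoding: a few parameters of a pattern-producing rule."""
    return [rng.uniform(-1, 1) for _ in range(4)]  # (a, b, c, d)


def express_generative(genome):
    """Expand the small genome into a full weight matrix.  Each weight is a
    smooth function of the connection's coordinates (i, j), so the resulting
    pattern is regular -- the property the abstract emphasizes."""
    a, b, c, d = genome
    return [[a * math.sin(b * i) + c * math.cos(d * j) for j in range(N_OUT)]
            for i in range(N_IN)]


def express_hybrid(gen_genome, deltas):
    """HybrID-style expression: regular pattern plus per-connection tweaks."""
    base = express_generative(gen_genome)
    return [[base[i][j] + deltas[i][j] for j in range(N_OUT)] for i in range(N_IN)]


def fitness(weights, target):
    """Toy fitness: negative squared error against a target weight pattern."""
    return -sum((weights[i][j] - target[i][j]) ** 2
                for i in range(N_IN) for j in range(N_OUT))


def climb_weights(weights, target, rng, steps=2000):
    """Hill-climb individual matrix entries (a crude direct-style search)."""
    best = fitness(weights, target)
    for _ in range(steps):
        i, j = rng.randrange(N_IN), rng.randrange(N_OUT)
        old = weights[i][j]
        weights[i][j] = old + rng.gauss(0, 0.05)
        f = fitness(weights, target)
        if f > best:
            best = f
        else:
            weights[i][j] = old  # revert mutations that do not help
    return best


if __name__ == "__main__":
    rng = random.Random(0)

    # A mostly regular target with some irregular noise mixed in, standing in
    # for the intermediate-regularity problems the abstract describes.
    regular = express_generative([0.8, 0.5, 0.4, 0.9])
    target = [[regular[i][j] + rng.gauss(0, 0.1) for j in range(N_OUT)]
              for i in range(N_IN)]

    # Direct encoding alone: mutate weights one at a time, no shared pattern.
    direct_score = climb_weights(direct_random_genome(rng), target, rng)

    # Generative encoding alone: mutate only the few pattern parameters.
    gen = generative_genome(rng)
    gen_score = fitness(express_generative(gen), target)
    for _ in range(2000):
        cand = [g + rng.gauss(0, 0.05) for g in gen]
        f = fitness(express_generative(cand), target)
        if f > gen_score:
            gen, gen_score = cand, f

    # HybrID-style phase: keep the evolved regular pattern and let a direct
    # layer of deltas supply the irregular refinements.  Climbing the deltas
    # toward the residual (target - base) is equivalent to climbing the full
    # hybrid weights toward the target, since fitness is squared error.
    base = express_generative(gen)
    residual = [[target[i][j] - base[i][j] for j in range(N_OUT)] for i in range(N_IN)]
    deltas = [[0.0 for _ in range(N_OUT)] for _ in range(N_IN)]
    climb_weights(deltas, residual, rng)
    hybrid_score = fitness(express_hybrid(gen, deltas), target)

    print(f"direct encoding alone : {direct_score:.3f}")
    print(f"generative alone      : {gen_score:.3f}")
    print(f"HybrID-style combo    : {hybrid_score:.3f}")
```

Running the sketch simply prints the three scores; the point is the structural difference between the search spaces (64 free weights, versus 4 pattern parameters, versus a pattern plus per-connection deltas), which mirrors the regularity-versus-refinement trade-off the abstract discusses.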