Artificial Neural Networks for online learning problems are often implemented with synaptic plasticity to achieve adaptive behaviour. A common problem is that the overall learning dynamics are emergent properties that depend strongly on the correct combination of neural architecture, plasticity rules and environmental features. It is not clear what complexity of architectures and learning rules is required to match specific control and learning problems. Here, a set of homosynaptic plasticity rules is applied to topologically unconstrained neural controllers that operate and evolve in dynamic reward-based scenarios. Performance is monitored in simulations of bee foraging problems and T-maze navigation. Varying reward locations compel the neural controllers to adapt their foraging strategies over time, fostering online reward-based learning. In contrast to previous studies, the results indicate that reward-based learning in complex dynamic scenarios can be achieved with basic plasticity rules and minimal topologies.
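To make the idea of reward-modulated homosynaptic plasticity on a minimal topology concrete, the following is a minimal sketch, not the paper's exact rule set: a small fixed-topology controller whose weights change online through a Hebbian-style rule in which each synapse uses only its own pre- and postsynaptic activity, scaled by the reward received. The network size, learning rate, rule coefficients (A, B, C, D) and the reward-switching scheme are illustrative assumptions, chosen only to mimic a task in which the rewarded option changes over time.

```python
import numpy as np

# Minimal sketch (assumed parameters, not the paper's exact rules):
# a small feed-forward controller with online, reward-modulated
# homosynaptic Hebbian plasticity.

rng = np.random.default_rng(0)

N_IN, N_OUT = 3, 2                 # e.g. stimulus features -> action outputs (assumed sizes)
eta = 0.05                         # learning rate (assumed)
A, B, C, D = 1.0, 0.0, 0.0, 0.0    # Hebbian rule coefficients (assumed)

w = rng.normal(scale=0.1, size=(N_OUT, N_IN))   # initial weights (would be evolved)

def forward(x):
    """Compute neuron activations for input x."""
    return np.tanh(w @ x)

def update(x, y, reward):
    """Homosynaptic update: each synapse changes as a function of its own
    pre/post activity only, modulated by the reward signal."""
    global w
    dw = eta * reward * (A * np.outer(y, x) + B * x[None, :] + C * y[:, None] + D)
    w += dw

# Toy usage: the rewarded action switches mid-run, loosely mimicking the
# changing reward locations that force the controller to re-adapt online.
for t in range(200):
    x = rng.integers(0, 2, size=N_IN).astype(float)
    y = forward(x)
    action = int(np.argmax(y))
    preferred = 0 if t < 100 else 1             # rewarded action changes at t = 100
    r = 1.0 if action == preferred else -0.2
    update(x, y, r)
```

In this sketch the plasticity rule itself is fixed while the initial weights would be the evolved quantity; whether a rule of this simple form suffices for a given task is exactly the kind of question the abstract raises about matching rule and architecture complexity to the problem.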