Design space exploration and parameter tuning for neuromorphic applications

  • Authors:
  • Kristofor D. Carlson, Nikil Dutt, Jayram M. Nageswaran, Jeffrey L. Krichmar

  • Affiliations:
  • University of California, Irvine, California (K. D. Carlson, N. Dutt, J. L. Krichmar); Brain Corporation, San Diego, California (J. M. Nageswaran)

  • Venue:
  • Proceedings of the Ninth IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis
  • Year:
  • 2013


Abstract

Large-scale spiking neural networks (SNNs) have been used to successfully model complex neural circuits that explore various neural phenomena such as learning and memory, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-suited to neuromorphic hardware because spiking events are often sparse, leading to a potentially large reduction in both bandwidth requirements and power usage. The inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs but has also made the task of tuning these biologically realistic SNNs difficult. We present an automated parameter-tuning framework capable of tuning large-scale SNNs quickly and efficiently using evolutionary algorithms (EAs) and off-the-shelf graphics processing units (GPUs). To test the feasibility of an automated parameter-tuning framework, our group used EAs to tune open parameters in SNNs running concurrently on a GPU. The SNNs were evolved to produce orientation-dependent stimulus responses similar to those found in simple cells of the primary visual cortex (V1) through the formation of self-organizing receptive fields (SORFs). The general evolutionary approach was as follows: a population of neural networks was created, each with a unique set of neural parameter values that defined overall behavior. Each SNN was then ranked by a fitness value assigned by an objective function in which higher fitness values were given to SNNs that (a) reproduced responses observed in primate visual cortex, (b) spanned the stimulus space, and (c) had sparse firing rates. The highest-ranked individuals were selected, recombined, and mutated to form the offspring for the next generation. This process continued until a desired fitness was reached or until other termination conditions were met (Figure 1a).
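The generational loop described above (create population, rank by fitness, select, recombine, mutate, repeat until termination) can be sketched in a few lines. This is a minimal, hedged illustration, not the paper's implementation: the parameter count, elite fraction, mutation scale, and the toy quadratic objective (standing in for the actual SNN simulation and its three-part fitness score) are all assumptions made for the example.

```python
import random

random.seed(0)  # deterministic run for the sketch

def make_individual(n_params, lo=0.0, hi=1.0):
    """Random parameter vector defining one candidate SNN's behavior."""
    return [random.uniform(lo, hi) for _ in range(n_params)]

def fitness(params):
    """Stand-in objective. In the paper this would run the SNN and score
    (a) V1-like responses, (b) stimulus-space coverage, and (c) firing-rate
    sparsity; here a toy quadratic surrogate peaked at 0.5 is used."""
    return -sum((p - 0.5) ** 2 for p in params)

def evolve(pop_size=20, n_params=4, generations=30,
           elite_frac=0.25, mutation_sigma=0.05, target=-1e-3):
    population = [make_individual(n_params) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        if fitness(ranked[0]) >= target:          # termination condition met
            return ranked[0]
        elite = ranked[: max(2, int(elite_frac * pop_size))]  # selection
        offspring = []
        while len(offspring) < pop_size:
            a, b = random.sample(elite, 2)
            child = [random.choice(pair) for pair in zip(a, b)]   # recombine
            child = [min(1.0, max(0.0, g + random.gauss(0, mutation_sigma)))
                     for g in child]                              # mutate
            offspring.append(child)
        population = offspring
    return max(population, key=fitness)

best = evolve()
```

In the paper the expensive step is the fitness call, which is exactly what gets offloaded to the GPU; the bookkeeping around it stays cheap.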
The automated parameter-tuning framework consisted of three software packages: (1) the CARLsim SNN simulator [1], (2) the Evolving Objects (EO) computational framework [2], and (3) a parameter-tuning interface (PTI), developed by our group, that connects CARLsim and EO (see Figure 1b). The EO computational framework ran the evolutionary algorithm on the user-designated parameters of SNNs in CARLsim. The PTI allowed the objective function to be calculated independently of the EO computational framework. Parameter values were passed from the EO computational framework through the PTI to the SNN in CARLsim, where the objective function was calculated. After the objective function was executed, the results were passed from the SNN in CARLsim back through the PTI to the EO computational framework for processing by the EA. With this approach, the fitness-function calculation, which involved running each SNN in the population, could be run in parallel on the GPU while the remainder of the EA calculations could be performed on the CPU (Figure 1b). A sample SNN with 4,104 neurons was tuned to respond with V1 simple-cell-like tuning curves and produce SORFs. A performance analysis comparing the GPU-accelerated implementation to a single-threaded CPU implementation showed that the GPU implementation achieved a 65× speedup. Additionally, the parameter-value solutions found in the tuned SNN were stable and robust. The automated parameter-tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
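The division of labor described above (an interface shuttles parameter vectors to the simulator, fitness evaluations run in parallel on the accelerator, and the EA proper stays on the host) can be sketched as follows. The class and function names here are illustrative only: the real PTI is a C++ bridge between the EO library and CARLsim, and the concurrent map below merely stands in for launching SNNs on the GPU.

```python
from concurrent.futures import ThreadPoolExecutor

def run_snn_and_score(params):
    """Stand-in for running one SNN on the GPU and computing its
    objective-function value; here a toy quadratic peaked at 0.5."""
    return -sum((p - 0.5) ** 2 for p in params)

class ParameterTuningInterface:
    """Illustrative PTI: carries parameter vectors to the simulator and
    fitness values back to the evolutionary algorithm."""
    def __init__(self, evaluate, max_workers=4):
        self.evaluate = evaluate
        self.max_workers = max_workers

    def score_population(self, population):
        # Fitness evaluations run concurrently (the GPU's role in the
        # paper); selection and breeding remain on the host thread.
        with ThreadPoolExecutor(max_workers=self.max_workers) as pool:
            return list(pool.map(self.evaluate, population))

pti = ParameterTuningInterface(run_snn_and_score)
population = [[0.1, 0.9], [0.5, 0.5], [0.4, 0.6]]
scores = pti.score_population(population)
# The EA would now rank `population` by `scores` and breed the next generation.
```

Keeping the objective function behind a single `evaluate` callable is what lets the same EA drive either a CPU or a GPU backend without modification, which mirrors the decoupling the PTI provides between EO and CARLsim.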