Automated neuron model optimization techniques: a review

  • Authors:
  • W. Van Geit; E. De Schutter; P. Achard

  • Affiliations:
  • Okinawa Institute of Science and Technology, Computational Neuroscience Unit, 7542 Onna, Onna-Son, 904-0411, Okinawa, Japan and University of Antwerp, Theoretical Neurobiology, Universiteitsplein ...
  • Okinawa Institute of Science and Technology, Computational Neuroscience Unit, 7542 Onna, Onna-Son, 904-0411, Okinawa, Japan and University of Antwerp, Theoretical Neurobiology, Universiteitsplein ...
  • Brandeis University, Volen Center for Complex System, 415 South Street, 02454, Waltham, MA, USA

  • Venue:
  • Biological Cybernetics - Special Issue: Quantitative Neuron Modeling
  • Year:
  • 2008

Abstract

The increase in complexity of computational neuron models makes hand tuning of model parameters more difficult than ever. Fortunately, the parallel increase in computer power allows scientists to automate this tuning. Optimization algorithms need two essential components. The first is a function that measures the difference between the output of the model with a given set of parameters and the experimental data. This error function, or fitness function, makes it possible to rank different parameter sets. The second component is a search algorithm that explores the parameter space to find the best parameter set in a minimal amount of time. In this review we distinguish three types of error functions: feature-based ones, point-by-point comparison of voltage traces, and multi-objective functions. We then detail several popular search algorithms, including brute-force methods, simulated annealing, genetic algorithms, evolution strategies, differential evolution, and particle-swarm optimization. Finally, we briefly describe Neurofitter, a free software package that combines a phase-plane trajectory density fitness function with several search algorithms.
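The two components can be illustrated together in a minimal sketch: a feature-based error function (summarizing a trace by a few scalar features rather than comparing it point by point) driving a simulated annealing search, one of the algorithms the review covers. The toy "model" (a damped oscillation), the chosen features, and the annealing schedule are all illustrative assumptions, not the implementation described in the paper.

```python
import math
import random

def model_trace(params, n=200, dt=0.1):
    # Toy stand-in for a neuron model: a damped oscillation whose
    # amplitude and decay constant play the role of model parameters.
    a, tau = params
    return [a * math.exp(-dt * i / tau) * math.sin(dt * i) for i in range(n)]

def features(trace):
    # Feature-based comparison: reduce a trace to a few scalar features
    # instead of comparing every sample against the data.
    return (max(trace), min(trace), sum(trace) / len(trace))

def error(params, target_feats):
    # Error (fitness) function: sum of squared feature differences.
    feats = features(model_trace(params))
    return sum((f - t) ** 2 for f, t in zip(feats, target_feats))

def anneal(target_feats, start, n_iter=5000, temp0=1.0, seed=0):
    # Simulated annealing: occasionally accept a worse parameter set,
    # with a probability that shrinks as the temperature cools, so the
    # search can escape local minima of the error function.
    rng = random.Random(seed)
    best = cur = start
    best_err = cur_err = error(cur, target_feats)
    for i in range(n_iter):
        temp = temp0 * (1 - i / n_iter) + 1e-9
        cand = tuple(p + rng.gauss(0, 0.1) for p in cur)
        if cand[1] <= 0:          # keep the decay constant physical
            continue
        cand_err = error(cand, target_feats)
        if cand_err < cur_err or rng.random() < math.exp((cur_err - cand_err) / temp):
            cur, cur_err = cand, cand_err
            if cur_err < best_err:
                best, best_err = cur, cur_err
    return best, best_err

# Pretend features extracted from (2.0, 5.0) are the experimental data,
# then try to recover those parameters starting from a poor guess.
target = features(model_trace((2.0, 5.0)))
fit, err = anneal(target, start=(1.0, 1.0))
```

The same error function could be plugged into any of the other search algorithms the review discusses (genetic algorithms, differential evolution, particle-swarm optimization); only the search loop changes.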