2008 Special Issue: The state of MIIND

  • Authors:
  • Marc de Kamps; Volker Baier; Johannes Drever; Melanie Dietz; Lorenz Mösenlechner; Frank van der Velde

  • Affiliations:
  • Biosystems Group, School of Computing, University of Leeds, LS2 9JT Leeds, United Kingdom
  • Neuro-Cognitive Psychology, Ludwig-Maximilians Universität München, Leopoldstrasse 13, München, Germany
  • Robotics and Embedded Systems, Institut für Informatik, Technische Universität München, Boltzmannstrasse 3, D-85748 Garching bei München, Germany
  • Robotics and Embedded Systems, Institut für Informatik, Technische Universität München, Boltzmannstrasse 3, D-85748 Garching bei München, Germany
  • Image Understanding & Knowledge-Based Systems, Institut für Informatik, Technische Universität München, Boltzmannstrasse 3, D-85748 Garching bei München, Germany
  • Leiden Institute for Brain and Cognition, Cognitive Psychology, Leiden University, Wassenaarseweg 52, 2333 AK Leiden, The Netherlands

  • Venue:
  • Neural Networks
  • Year:
  • 2008


Abstract

MIIND (Multiple Interacting Instantiations of Neural Dynamics) is a highly modular, multi-level C++ framework that aims to shorten the development time for models in Cognitive Neuroscience (CNS). It offers reusable code modules (libraries of classes and functions) aimed at solving problems that occur repeatedly in modelling, but tries not to impose a specific modelling philosophy or methodology. At the lowest level, it offers support for the implementation of sparse networks. For example, the library SparseImplementationLib supports sparse random networks, and the library LayerMappingLib can be used for sparse regular networks of filter-like operators. The library DynamicLib, which builds on top of SparseImplementationLib, offers a generic framework for simulating network processes. Presently, several specific network process implementations are provided in MIIND: the Wilson-Cowan and Ornstein-Uhlenbeck types, and population density techniques for leaky-integrate-and-fire neurons driven by Poisson input. A design principle of MIIND is to support detailing: the refinement of an originally simple model into a form that includes more biological detail. Another design principle is extensibility: the reuse of an existing model in a larger, more extended one. One of the main uses of MIIND so far has been the instantiation of neural models of visual attention. Recently, we have added a library for implementing biologically inspired models of artificial vision, such as HMAX and its recent successors. In the long run we hope to apply suitably adapted neuronal mechanisms of attention to these artificial models.
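To make the "network process" idea concrete, the sketch below integrates the two-population Wilson-Cowan rate equations mentioned in the abstract with a forward-Euler step. This is an illustrative standalone sketch of the dynamics, not MIIND's DynamicLib API; all names (`sigmoid`, `wilson_cowan`) and parameter values are our own assumptions.

```cpp
#include <cmath>
#include <utility>

// Sigmoidal gain function typical of Wilson-Cowan population models
// (illustrative choice; MIIND may use a different parameterisation).
double sigmoid(double x, double gain = 1.0) {
    return 1.0 / (1.0 + std::exp(-gain * x));
}

// Forward-Euler integration of the coupled Wilson-Cowan equations
//   tau dE/dt = -E + f(wEE*E - wEI*I + P)
//   tau dI/dt = -I + f(wIE*E - wII*I + Q)
// Returns the excitatory and inhibitory rates after t_end seconds.
std::pair<double, double> wilson_cowan(double E0, double I0,
                                       double wEE, double wEI,
                                       double wIE, double wII,
                                       double P, double Q,
                                       double tau, double dt, double t_end) {
    double E = E0, I = I0;
    for (double t = 0.0; t < t_end; t += dt) {
        double dE = (-E + sigmoid(wEE * E - wEI * I + P)) / tau;
        double dI = (-I + sigmoid(wIE * E - wII * I + Q)) / tau;
        E += dt * dE;  // explicit Euler update; dt must be << tau
        I += dt * dI;
    }
    return {E, I};
}
```

Because each update is a convex combination of the current rate and a sigmoid value, the rates stay in (0, 1) as long as dt is small relative to tau; a framework like DynamicLib wraps such per-node dynamics in a generic network-simulation loop.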