MIIND (Multiple Interacting Instantiations of Neural Dynamics) is a highly modular, multi-level C++ framework that aims to shorten the development time for models in Cognitive Neuroscience (CNS). It offers reusable code modules (libraries of classes and functions) for solving problems that occur repeatedly in modelling, but tries not to impose a specific modelling philosophy or methodology. At the lowest level, it supports the implementation of sparse networks: for example, the library SparseImplementationLib supports sparse random networks, and the library LayerMappingLib can be used for sparse regular networks of filter-like operators. The library DynamicLib, which builds on SparseImplementationLib, offers a generic framework for simulating network processes. At present, several specific network process implementations are provided in MIIND: Wilson-Cowan and Ornstein-Uhlenbeck dynamics, and population density techniques for leaky integrate-and-fire neurons driven by Poisson input. One design principle of MIIND is support for detailing: the refinement of an originally simple model into a form that includes more biological detail. Another is extensibility: the reuse of an existing model within a larger, more extended one. One of the main uses of MIIND so far has been the instantiation of neural models of visual attention. Recently, we have added a library for implementing biologically inspired models of artificial vision, such as HMAX and its recent successors. In the long run we hope to apply suitably adapted neuronal mechanisms of attention to these artificial models.