To investigate the relations between structure and function in both artificial and natural neural networks, we present a series of simulations and analyses with modular neural networks. We propose a number of design principles in the form of explicit ways in which neural modules can cooperate in recognition tasks; these results may supplement recent accounts of the relation between structure and function in the brain. The networks used consist of several modules: standard subnetworks that serve as higher-order units with a distinct structure and function. The simulations rely on a particular network module called the categorizing and learning module (CALM). This module, developed mainly for unsupervised categorization and learning, is able to adjust its local learning dynamics. The way in which modules are interconnected is an important determinant of the learning and categorization behaviour of the network as a whole. Based on arguments derived from neuroscience, psychology, computational learning theory, and hardware implementation, we present a framework for the design of such modular networks. A number of small-scale simulation studies show how intermodule connectivity patterns implement "neural assemblies" that induce a particular category structure in the network. Learning and categorization improve because the induced categories are more compatible with the structure of the task domain. In addition to structural compatibility, we propose two further design principles that underlie information processing in interactive activation networks: replication and recurrence. Because no general theory exists for relating network architectures to specific neural functions, we extend the biological metaphor of neural networks by applying genetic algorithms (a biocomputing method for search and optimization based on natural selection and evolution) to search for optimal modular network architectures for learning a visual categorization task.
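The genetic search over intermodule connectivity described above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the authors' actual method: the genome is a flattened binary connectivity matrix between modules, and the fitness function here is a stand-in placeholder (in the study, fitness would be categorization performance after training the modular network on the visual task).

```python
import random

random.seed(0)

N_MODULES = 4                        # number of CALM-like modules (illustrative)
GENOME_LEN = N_MODULES * N_MODULES   # flattened intermodule connectivity matrix

def fitness(genome):
    # Placeholder fitness: the real criterion would be learning and
    # categorization performance of the trained modular network. Here we
    # simply reward sparse, chain-like forward connectivity as a stand-in.
    score = 0
    for i in range(N_MODULES):
        for j in range(N_MODULES):
            bit = genome[i * N_MODULES + j]
            if bit and j == i + 1:   # reward forward links along a chain
                score += 2
            elif bit:                # penalize all other connections
                score -= 1
    return score

def evolve(pop_size=20, generations=40, p_mut=0.05):
    # Random initial population of binary connectivity genomes.
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]             # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, GENOME_LEN)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g
                     for g in child]               # bit-flip mutation
            children.append(child)
        pop = parents + children                   # elitist replacement
    return max(pop, key=fitness)

best = evolve()
```

The returned genome can be reshaped into an N_MODULES x N_MODULES matrix whose entries indicate which modules project to which, i.e., a candidate modular architecture to be trained and evaluated on the task.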
The best-performing network architectures appeared to reproduce some overall characteristics of the natural visual system, such as the organization of coarse and fine processing of stimuli in separate pathways. A potentially important result is that a genetically defined initial architecture can not only enhance learning and recognition performance but also induce a system to generalize its learned behaviour better to instances never encountered before. This may explain why, for many vital learning tasks in organisms, only minimal exposure to relevant stimuli is necessary.