Multiple neural network systems have become popular techniques for tackling complex tasks, often giving improved performance compared to single network systems. For example, modular systems can provide improvements in generalisation through task decomposition, whereas multiple classifier and regressor systems typically improve generalisation through the ensemble combination of redundant networks. Whilst there has been significant focus on understanding the theoretical properties of some of these multi-net systems, particularly ensemble systems, there has been little theoretical work on the properties of the generic combination of networks, which is important for developing more complex systems, perhaps even ones a step closer to their biological counterparts. In this article, we provide a formal framework in which the generic combination of neural networks can be described, and in which the properties of the system can be rigorously analysed. We achieve this by describing multi-net systems in terms of partially ordered sets and state transition systems. By way of example, we explore an abstract version of learning applied to a generic multi-net system that can combine an arbitrary number of networks in sequence and in parallel. Using the framework, we show with a constructive proof that, under specific conditions, if it is possible to train the generic system, then training can be achieved by the abstract technique described.
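As a rough illustration of the kind of composition the abstract describes, the sketch below builds a toy multi-net from networks combined in sequence and in parallel, with the composition tree inducing a partial order over its components. The class names (Network, Sequential, Parallel) and the averaging combination rule are assumptions chosen purely for illustration; they are not taken from the paper's formal framework, which is expressed in terms of partially ordered sets and state transition systems rather than code.

```python
# Illustrative sketch only: a toy multi-net built from components combined in
# sequence and in parallel. The tree of components induces a partial order
# (each combiner precedes its children); this is not the paper's formalism.
import numpy as np


class Component:
    """Base node of the composition tree."""
    def forward(self, x):
        raise NotImplementedError


class Network(Component):
    """A leaf component: a single one-layer network with fixed random weights."""
    def __init__(self, in_dim, out_dim, rng):
        self.W = rng.standard_normal((out_dim, in_dim)) * 0.1

    def forward(self, x):
        return np.tanh(self.W @ x)


class Sequential(Component):
    """Sequential combination: each child's output feeds the next child."""
    def __init__(self, children):
        self.children = children

    def forward(self, x):
        for child in self.children:
            x = child.forward(x)
        return x


class Parallel(Component):
    """Parallel combination: children share the input and their outputs are
    averaged (a simple ensemble-style rule, chosen only for illustration)."""
    def __init__(self, children):
        self.children = children

    def forward(self, x):
        return np.mean([child.forward(x) for child in self.children], axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Three networks combined in parallel, followed in sequence by a fourth.
    system = Sequential([
        Parallel([Network(4, 8, rng) for _ in range(3)]),
        Network(8, 2, rng),
    ])
    print(system.forward(rng.standard_normal(4)))
```

Because Sequential and Parallel nodes may themselves contain further Sequential and Parallel nodes, an arbitrary number of networks can be nested in this way, which is the generic sequential-and-parallel combination the abstract refers to; training such a system is, of course, a separate matter handled by the paper's state-transition treatment of learning.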