References:
- On Intelligence.
- Towards cortex sized artificial neural systems. Neural Networks.
- Learning invariant features using inertial priors. Annals of Mathematics and Artificial Intelligence.
- Anatomy of a cortical simulator. In Proceedings of the 2007 ACM/IEEE Conference on Supercomputing.
- Entering the petaflop era: the architecture and performance of Roadrunner. In Proceedings of the 2008 ACM/IEEE Conference on Supercomputing.
- A computational model of the cerebral cortex. In Proceedings of the 20th National Conference on Artificial Intelligence (AAAI'05), Volume 2.
- On the prospects for building a working model of the visual cortex. In Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI'07), Volume 2.
Recent scientific studies of the brain have led to new models of information processing. Some of these models are based on hierarchical Bayesian networks and offer several benefits over traditional neural networks. Large-scale implementations of brain models have the potential for strong inference capabilities, and hierarchical Bayesian models lend themselves well to large scales. Multi-core processors are currently the standard architecture for high-performance computing platforms. In this paper we examine the parallelization and optimization of Dean's hierarchical Bayesian model on two multi-core architectures: the nine-core IBM Cell and the quad-core Intel Xeon. This is the first study to parallelize this class of models on multi-core processors. We evaluate two parallelization strategies and examine the performance of the model as it scales. Our results indicate that, for the network sizes examined, the Cell processor provides speedups of up to 108 times over a serial implementation of the model; the quad-core Intel Xeon provides a speedup of 36 times for the same model configuration.
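The abstract does not detail the parallelization strategies themselves, but hierarchical Bayesian models of this kind naturally admit a node-parallel decomposition: the message each node sends up to its parent can be computed independently, so nodes in a layer can be statically partitioned across cores. The sketch below is purely illustrative, not the paper's implementation; the layer sizes, conditional probability tables, and the thread-based partitioning are all assumptions made for the example.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Hypothetical network layer (illustrative, not from the paper):
# node i has a conditional probability table cpts[i][c, p] = P(child=c | parent=p)
# and an observed likelihood vector over its own states.
rng = np.random.default_rng(0)
N_NODES, N_STATES = 64, 4
cpts = rng.random((N_NODES, N_STATES, N_STATES))
cpts /= cpts.sum(axis=1, keepdims=True)          # each column sums to 1
likelihoods = rng.random((N_NODES, N_STATES))

def node_message(i):
    # Upward message to the parent: lambda_i(p) = sum_c P(c | p) * lik_i(c)
    return cpts[i].T @ likelihoods[i]

def serial_pass():
    return np.stack([node_message(i) for i in range(N_NODES)])

def parallel_pass(workers=4):
    # Messages are independent, so nodes can be split across workers,
    # mirroring a per-core partition of the layer.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return np.stack(list(ex.map(node_message, range(N_NODES))))

if __name__ == "__main__":
    assert np.allclose(serial_pass(), parallel_pass())
```

Because each message touches only its own node's table and evidence, this decomposition has no inter-core communication within a layer; synchronization is needed only at layer boundaries, which is what makes such models attractive for multi-core scaling.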