Artificial neural networks (ANNs) are among the most commonly used methods for data pre-processing in high-energy physics applications. The training phase of an ANN is critical for obtaining a network that generalizes from the available data to new situations; from a computational viewpoint, however, this phase is very costly and resource intensive. The aim of this work is therefore to parallelize the kernel of a training algorithm for a multilayer perceptron used to analyze data from the Large Electron Positron Collider at CERN, and to evaluate its performance and scalability. The training methods selected were linear-BFGS and hybrid linear-BFGS. Different approaches to the parallel implementation are presented and evaluated in this paper. To allow a complete performance and scalability evaluation of the proposed approach, three different parallel architectures are used: a shared-memory multiprocessor, a cluster, and a grid environment.
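Although the abstract gives no implementation details, the usual way to parallelize batch training of an MLP is to split the training patterns across processes, let each process accumulate a partial error and gradient over its slice, and combine the partial results with a summing reduction before the (sequential) BFGS weight update. The sketch below illustrates that pattern only; it assumes MPI via mpi4py and a one-hidden-layer MLP with tanh units, and names such as load_local_patterns and the network width are illustrative placeholders, not taken from the paper.

import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

N_HIDDEN = 10  # illustrative network width, not from the paper

def mlp_loss_grad(w, X, y, n_hidden=N_HIDDEN):
    # Squared-error loss and gradient for a one-hidden-layer MLP
    # with tanh hidden units and a linear output.
    n_in = X.shape[1]
    W1 = w[:n_in * n_hidden].reshape(n_in, n_hidden)
    W2 = w[n_in * n_hidden:].reshape(n_hidden, 1)
    h = np.tanh(X @ W1)                  # hidden activations
    err = h @ W2 - y                     # output residuals
    loss = 0.5 * float(np.sum(err ** 2))
    g_W2 = h.T @ err                     # gradient w.r.t. output weights
    g_h = (err @ W2.T) * (1.0 - h ** 2)  # backpropagate through tanh
    g_W1 = X.T @ g_h                     # gradient w.r.t. hidden weights
    return loss, np.concatenate([g_W1.ravel(), g_W2.ravel()])

# Each process holds a disjoint slice of the training patterns.
X_local, y_local = load_local_patterns(rank, size)  # hypothetical loader

def global_loss_grad(w):
    # Loss and gradient over ALL patterns: local evaluation followed
    # by a summing reduction across processes.
    loss, grad = mlp_loss_grad(w, X_local, y_local)
    grad_sum = np.empty_like(grad)
    comm.Allreduce(grad, grad_sum, op=MPI.SUM)
    loss_sum = comm.allreduce(loss, op=MPI.SUM)
    return loss_sum, grad_sum

Because the reduction gives every process the identical summed gradient, each process can then run the same BFGS update redundantly (for instance via scipy.optimize.minimize with method='BFGS' and jac=True applied to global_loss_grad), so the weight vector stays consistent across processes without an extra broadcast.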