Probabilistic reasoning in intelligent systems: networks of plausible inference
Probabilistic Graphical Models: Principles and Techniques - Adaptive Computation and Machine Learning
High-throughput Bayesian network learning using heterogeneous multicore computers
Proceedings of the 24th ACM International Conference on Supercomputing
Bridging the GPGPU-FPGA efficiency gap
Proceedings of the 19th ACM/SIGDA international symposium on Field programmable gate arrays
Exploring many-core design templates for FPGAs and ASICs
International Journal of Reconfigurable Computing - Special issue on Selected Papers from the International Conference on Reconfigurable Computing and FPGAs (ReConFig'10)
FPGA implementation of particle swarm optimization for Bayesian network learning
Computers and Electrical Engineering
ParaLearn is a scalable, parallel FPGA-based system for learning interaction networks using Bayesian statistics. ParaLearn includes problem-specific parallel, scalable algorithms, system software, and a hardware architecture to address this complex problem. Learning interaction networks from data uncovers causal relationships and allows scientists to predict and explain a system's behavior. Interaction networks have applications in many fields, though we discuss them here in the context of personalized medicine, where state-of-the-art high-throughput experiments generate extensive data on gene expression, DNA sequencing, and protein abundance. In this paper we demonstrate how ParaLearn models signaling networks in human T-cells. We show a greater than 2,000-fold speedup on a Field Programmable Gate Array (FPGA) compared to a baseline implementation on a General Purpose Processor (GPP), a 2.38-fold speedup over a heavily optimized parallel GPP implementation, and between 2.74-fold and 6.15-fold power savings over the optimized GPP. Using current-generation FPGA technology and caching optimizations, we further project speedups of up to 8.15-fold relative to the optimized GPP. Compared to software approaches, ParaLearn is faster, more power efficient, and can support novel learning algorithms that substantially improve the precision and robustness of the results.
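The scoring of candidate network structures against data is the computational core that systems like ParaLearn accelerate. As a minimal sketch (not ParaLearn's actual algorithm or API), the following computes the BDeu family score of one variable given a candidate parent set from discrete observations; the function and data-layout names are illustrative assumptions:

```python
# Hypothetical sketch: BDeu family score for Bayesian network structure
# learning over discrete data. All names here are illustrative, not
# taken from ParaLearn.
from collections import defaultdict
from math import lgamma

def bdeu_family_score(data, child, parents, arities, alpha=1.0):
    """Log marginal likelihood contribution of `child` given `parents`.

    data    : list of dicts mapping variable name -> discrete state
    arities : dict mapping variable name -> number of states
    alpha   : equivalent sample size of the BDeu prior
    """
    r = arities[child]                        # number of child states
    q = 1
    for p in parents:                         # number of parent configurations
        q *= arities[p]
    a_j, a_jk = alpha / q, alpha / (q * r)    # BDeu prior pseudo-counts

    n_j = defaultdict(int)                    # counts per parent configuration
    n_jk = defaultdict(int)                   # counts per (config, child state)
    for row in data:
        j = tuple(row[p] for p in parents)
        n_j[j] += 1
        n_jk[(j, row[child])] += 1

    # Standard BDeu closed form: sum of log-gamma ratios over the counts.
    score = 0.0
    for j, nj in n_j.items():
        score += lgamma(a_j) - lgamma(a_j + nj)
    for (j, k), njk in n_jk.items():
        score += lgamma(a_jk + njk) - lgamma(a_jk)
    return score
```

A structure search maximizes the sum of such family scores over all variables; because each candidate parent set is scored independently from shared counts, this step parallelizes naturally, which is what makes it a good fit for FPGA acceleration.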