Computer-based probabilistic-network construction
A Bayesian network (BN) learned from data can aid in diagnosing and predicting failures within a system, as well as in other tasks such as system monitoring. However, learning a BN is computationally intensive, which makes BN learning a candidate for acceleration using reconfigurable hardware such as field-programmable gate arrays (FPGAs). We present an FPGA-based implementation of BN learning using particle-swarm optimization (PSO). This design thus occupies the intersection of three areas: reconfigurable computing, BN learning, and PSO. There is significant prior work in each of these three areas, and indeed in each pairwise combination of them, but the present work is the first to study the combination of all three. As a baseline, we use a prior software implementation of BN learning using PSO, and we compare it to our FPGA-based implementation to study trade-offs in performance and cost. Both designs use a master-slave topology and floating-point calculations for the fitness function. The performance of the FPGA-based version is limited not by the fitness function, but rather by the construction of conditional probability tables (CPTs), which requires only integer calculations. We exploit this difference by placing the two functions in separate clock domains. The FPGA-based solution achieves about 2.6 times the number of fitness evaluations per second per slave compared to the software implementation.
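To make the abstract's architecture concrete, the following is a minimal, hypothetical sketch of BN structure learning with a binary PSO variant. It is not the paper's actual algorithm: the encoding (edge bits over a fixed node ordering, which guarantees acyclicity), the drift/flip probabilities, and the scoring function are all illustrative assumptions. It does mirror the split the abstract describes: CPT construction is a pure integer counting pass, while the fitness (a log-likelihood score) is the only floating-point step.

```python
# Hypothetical sketch: binary PSO for BN structure search.
# Assumptions (not from the paper): particles encode edge sets over a fixed
# node ordering (edge i -> j allowed only when i < j, so the graph is acyclic),
# and fitness is a maximum-likelihood score derived from CPT counts.
import math
import random


def cpt_loglik(data, parents, child):
    """Integer counting pass (CPT construction), then floating-point scoring."""
    counts = {}
    for row in data:
        key = (tuple(row[p] for p in parents), row[child])
        counts[key] = counts.get(key, 0) + 1
    parent_tot = {}
    for (pa, _), c in counts.items():
        parent_tot[pa] = parent_tot.get(pa, 0) + c
    # sum of c * log P(child | parents); always <= 0
    return sum(c * math.log(c / parent_tot[pa]) for (pa, _), c in counts.items())


def fitness(edges, data, n):
    """Score a structure: sum each node's log-likelihood given its parents."""
    return sum(
        cpt_loglik(data, [i for i in range(j) if edges[i][j]], j)
        for j in range(n)
    )


def pso_learn(data, n, particles=8, iters=30, seed=0):
    rng = random.Random(seed)

    def rand_edges():
        return [[i < j and rng.random() < 0.5 for j in range(n)] for i in range(n)]

    swarm = [rand_edges() for _ in range(particles)]
    best = [row[:] for row in max(swarm, key=lambda e: fitness(e, data, n))]
    best_f = fitness(best, data, n)
    for _ in range(iters):
        for e in swarm:
            # Binary "velocity" step: each upper-triangular edge bit drifts
            # toward the global best with some probability, else may flip.
            for i in range(n):
                for j in range(i + 1, n):
                    r = rng.random()
                    if r < 0.7:
                        e[i][j] = best[i][j]
                    elif r < 0.8:
                        e[i][j] = not e[i][j]
            f = fitness(e, data, n)
            if f > best_f:
                best, best_f = [row[:] for row in e], f
    return best, best_f
```

In a master-slave arrangement like the one the abstract describes, the master would run the PSO update loop while each slave evaluates `fitness` for one particle; the integer counting in `cpt_loglik` and the floating-point scoring are the two parts that the FPGA design separates into different clock domains.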