Parallel test generation for sequential circuits on general-purpose multiprocessors
DAC '91 Proceedings of the 28th ACM/IEEE Design Automation Conference
Concurrent fault simulation of logic gates and memory blocks on message passing multicomputers
DAC '92 Proceedings of the 29th ACM/IEEE Design Automation Conference
Parallel algorithms for VLSI computer-aided design
Using MPI: portable parallel programming with the message-passing interface
Zamlog: a parallel algorithm for fault simulation based on Zambezi
Proceedings of the 1996 IEEE/ACM international conference on Computer-aided design
A parallel algorithm for fault simulation based on PROOFS
ICCD '95 Proceedings of the 1995 International Conference on Computer Design: VLSI in Computers and Processors
Overcoming the Serial Logic Simulation Bottleneck in Parallel Fault Simulation
VLSID '97 Proceedings of the Tenth International Conference on VLSI Design: VLSI in Multimedia Applications
Automatic test generation using genetically-engineered distinguishing sequences
VTS '96 Proceedings of the 14th IEEE VLSI Test Symposium
ZAMBEZI: a parallel pattern parallel fault sequential circuit fault simulator
VTS '96 Proceedings of the 14th IEEE VLSI Test Symposium
SPITFIRE: scalable parallel algorithms for test set partitioned fault simulation
VTS '97 Proceedings of the 15th IEEE VLSI Test Symposium
Parallel algorithms for power estimation
DAC '98 Proceedings of the 35th annual Design Automation Conference
We propose two new asynchronous parallel algorithms for test-set-partitioned fault simulation. The algorithms are based on a new two-stage approach to parallelizing fault simulation for sequential VLSI circuits, in which the test set is partitioned among the available processors. They produce the same results as the previous synchronous two-stage approach but run faster, owing to their dynamic behavior and minimal redundant work. A theoretical analysis comparing the various algorithms is also given to provide insight into their performance. The implementations were written in MPI and are therefore portable to many parallel platforms. Results are presented for a shared-memory multiprocessor.
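The core idea of test-set partitioning, splitting the test vectors into one block per processor so each can fault-simulate its share independently, can be sketched as follows. This is an illustrative sketch only; the function name and the contiguous block-partitioning scheme are assumptions for exposition, not the authors' implementation.

```python
def partition_test_set(test_vectors, num_procs):
    """Split the test set into num_procs contiguous blocks, one per processor.

    Contiguous blocks preserve the order of the vectors within each block,
    which matters for sequential circuits, where a vector's response depends
    on the state established by the vectors simulated before it.
    """
    n = len(test_vectors)
    base, extra = divmod(n, num_procs)  # spread any remainder over the first procs
    partitions, start = [], 0
    for p in range(num_procs):
        size = base + (1 if p < extra else 0)
        partitions.append(test_vectors[start:start + size])
        start += size
    return partitions


# Example: 10 test vectors partitioned among 3 processors.
tests = [f"v{i}" for i in range(10)]
parts = partition_test_set(tests, 3)
```

In an actual MPI implementation, each rank would receive (or compute) its own block and simulate it against the fault list, with the two-stage scheme resolving the unknown starting state at each partition boundary.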