This paper presents performance results for the design and implementation of parallel pipelined Space-Time Adaptive Processing (STAP) algorithms on parallel computers. In particular, it describes the issues involved in parallelization, our parallelization approach, and performance results on an Intel Paragon. The paper also discusses the process of developing software for such an application on parallel computers when latency and throughput are considered together, and presents the tradeoffs made with respect to inter- and intra-task communication and data redistribution. The results show that not only was scalable performance achieved for the individual component tasks of STAP, but linear speedups were also obtained for the integrated task performance, in terms of both latency and throughput. Results are presented for up to 236 compute nodes (limited by the machine size available to us). Another interesting observation from the implementation is that assigning additional processors to one task can improve the performance of other tasks without any increase in the number of processors assigned to them; such effects normally cannot be predicted by theoretical analysis.
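To make the latency/throughput tradeoff concrete, the following is a minimal sketch (not the paper's implementation) of the standard pipeline model: each task gets a share of the processors, end-to-end latency is the sum of the stage times, and throughput is bounded by the slowest stage. The task names, work amounts, and processor counts below are hypothetical, and linear speedup within each task is assumed.

```python
# Hypothetical sketch of pipeline latency vs. throughput under a
# per-task processor assignment. Work amounts and processor counts
# are illustrative, not from the paper.

def pipeline_metrics(work, procs):
    """work[i]: total work of pipeline task i (arbitrary time units on
    one processor); procs[i]: processors assigned to task i.
    Assumes ideal (linear) speedup within each task."""
    stage_times = [w / p for w, p in zip(work, procs)]
    latency = sum(stage_times)           # one data set traverses every stage
    throughput = 1.0 / max(stage_times)  # rate is set by the slowest stage
    return latency, throughput

# Three hypothetical STAP-like tasks with unequal work.
work = [120.0, 60.0, 240.0]

lat1, thr1 = pipeline_metrics(work, [4, 2, 8])   # balanced: all stages 30.0
lat2, thr2 = pipeline_metrics(work, [4, 2, 16])  # extra processors on task 3
```

In this toy model, doubling the processors on the third task shortens that stage and hence the end-to-end latency, while throughput stays pinned by the remaining bottleneck stages; it does not, however, capture the cross-task effects observed in the paper's implementation, which arise from communication behavior rather than from stage times alone.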