Several parallel programming languages, libraries, and environments have been developed to ease the task of writing programs for multiprocessors. Proponents of each approach often point to language features designed to provide the programmer with a simple programming interface. However, virtually no data exists that quantitatively evaluates the relative ease of use of different parallel programming languages. This paper borrows techniques from the software engineering field to quantify the complexity of three predominant programming models: shared memory, message passing, and High-Performance Fortran. It concludes that traditional software complexity metrics are effective indicators of the relative complexity of parallel programming languages. The impact of complexity on run-time performance is also discussed in the context of message passing versus HPF on an IBM SP2.
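To illustrate the kind of traditional software complexity metric the abstract refers to, the sketch below approximates McCabe-style cyclomatic complexity by counting decision points in a C-like source string. This is a simplified illustration under my own assumptions, not the specific measurement procedure or tooling used in the paper; production metric tools parse the code rather than match keywords.

```python
import re

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe cyclomatic complexity as
    1 + the number of decision points in the source text."""
    # Count branching keywords (word-bounded so e.g. 'format' is not matched).
    branches = len(re.findall(r"\b(if|for|while|case)\b", source))
    # Short-circuit operators also add decision points; count them separately
    # since they are not word-bounded tokens.
    short_circuits = source.count("&&") + source.count("||")
    return 1 + branches + short_circuits

example = """
for (i = 0; i < n; i++) {
    if (a[i] > 0 && b[i] > 0)
        s += a[i];
}
"""
print(cyclomatic_complexity(example))  # 1 base + for + if + && = 4
```

A metric like this can be computed for the shared-memory, message-passing, and HPF versions of the same program to compare their relative complexity, which is the style of comparison the paper performs.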