State-of-the-art advances, in particular those anticipated from LSI, have given fresh impetus to research in the area of parallel processing. The motives for parallel processing include the following:

1. Real-time urgency. Parallel processing can increase the speed of computation beyond the limits imposed by component technology.

2. Reduction of the turnaround time of high-priority jobs.

3. Reduction of memory and time requirements for "housekeeping" chores. The simultaneous but properly interlocked operations of reading inputs into memory, error checking, and editing can reduce the need for large intermediate storage or for costly transfers between members of a storage hierarchy.

4. An increase in simultaneous service to many users. In the field of the computer utility, for example, periods of peak demand are difficult to predict. The availability of spare processors enables an installation to minimize the effects of these peak periods. In addition, in the event of a system failure, faster computation permits service to be provided to more users before the failure occurs.

5. Improved performance in a uniprocessor multiprogrammed environment. Even on a uniprocessor, parallel-processable segments of high-priority jobs can be overlapped so that while one segment waits for I/O, the processor computes its companion segment, achieving an overall speedup in execution.
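The overlap described in item 5 can be sketched in modern terms with two threads: one segment blocks on (simulated) I/O while a companion segment keeps the processor busy, so total elapsed time approaches the longer of the two rather than their sum. This is a minimal illustration, not the survey's own mechanism; the segment names and the use of Python threading are assumptions for the example.

```python
import threading
import time

def io_bound_segment(results):
    # Illustrative segment blocked on I/O (e.g., reading input into memory).
    time.sleep(0.2)  # stands in for an I/O wait
    results["io"] = "input loaded"

def compute_segment(results):
    # Companion segment that keeps the processor busy during the I/O wait.
    results["sum"] = sum(i * i for i in range(100_000))

results = {}
start = time.perf_counter()
io_thread = threading.Thread(target=io_bound_segment, args=(results,))
io_thread.start()         # the I/O wait proceeds concurrently...
compute_segment(results)  # ...while the processor computes the companion segment
io_thread.join()
elapsed = time.perf_counter() - start
# Overlapped execution takes roughly max(io, compute) time, not their sum.
```

Run sequentially, the two segments would take the sum of their individual times; overlapped, the compute work is hidden inside the I/O wait, which is the speedup the passage describes.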