Robust scalability analysis and SPM case studies
The Journal of Supercomputing
PC clusters have emerged as a viable alternative for high-performance, low-cost computing. In such an environment, sharing data among processes is essential; accessing the shared data, however, often stalls concurrently executing threads. We propose a novel data representation scheme in which an application data entity can be incarnated as a set of objects distributed across the cluster. The runtime support system manages the incarnated objects, and data access is possible only through an appropriate interface. This distributed representation facilitates parallel accesses for updates, so tasks are subject to few limitations and application programs can harness high degrees of parallelism. Experiments on a PC cluster demonstrate the effectiveness of our approach.
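The core idea of the abstract — splitting one logical data entity into several runtime-managed incarnations so that concurrent updates do not contend on a single object, with reads merged behind an access interface — can be sketched in shared-memory Java. All class and method names below are illustrative, not from the paper; a real cluster deployment would distribute the incarnations across nodes (e.g. via Java RMI) rather than across threads in one JVM:

```java
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical sketch: one logical counter incarnated as several
// objects, so parallel updates rarely collide on the same incarnation.
final class IncarnatedCounter {
    private final AtomicLong[] incarnations;

    IncarnatedCounter(int n) {
        incarnations = new AtomicLong[n];
        for (int i = 0; i < n; i++) incarnations[i] = new AtomicLong();
    }

    // Update interface: each caller is routed to one incarnation.
    void add(int callerId, long delta) {
        incarnations[callerId % incarnations.length].addAndGet(delta);
    }

    // Read interface: the runtime merges all incarnations on demand.
    long value() {
        long sum = 0;
        for (AtomicLong a : incarnations) sum += a.get();
        return sum;
    }
}

public class Demo {
    public static void main(String[] args) throws InterruptedException {
        IncarnatedCounter c = new IncarnatedCounter(4);
        Thread[] workers = new Thread[4];
        for (int t = 0; t < 4; t++) {
            final int id = t;
            workers[t] = new Thread(() -> {
                for (int i = 0; i < 100_000; i++) c.add(id, 1);
            });
            workers[t].start();
        }
        for (Thread w : workers) w.join();
        System.out.println(c.value()); // prints 400000
    }
}
```

Because each thread updates a distinct incarnation, the updates proceed fully in parallel; only the merging read touches every object, which matches the paper's claim that the representation favors update-heavy parallel workloads.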