A priority-based distributed group mutual exclusion algorithm when group access is non-uniform

  • Authors:
  • Neeraj Mittal; Prajwal K. Mohan

  • Affiliations:
  • Department of Computer Science, The University of Texas at Dallas, Richardson, TX 75083, USA; Digital Home Group, Intel Corporation, Hillsboro, OR 97124, USA

  • Venue:
  • Journal of Parallel and Distributed Computing
  • Year:
  • 2007


Abstract

In the group mutual exclusion problem, each critical section has a type, or group, associated with it. Processes requesting critical sections belonging to the same group (that is, of the same type) may execute their critical sections concurrently. However, processes requesting critical sections belonging to different groups (that is, of different types) must execute their critical sections in a mutually exclusive manner. Most algorithms proposed so far in the literature for solving the group mutual exclusion problem treat all groups equally. This is quite acceptable if a process, when making a request for a critical section, selects the group for that critical section uniformly at random. However, if some groups are more likely to be selected than others, then better performance can be achieved by treating different groups differently. In this paper, we propose an efficient algorithm for solving the group mutual exclusion problem when group selection probabilities are non-uniformly distributed. Our algorithm has a message complexity of 2n-1 per critical-section request, where n is the number of processes in the system. It has a low synchronization delay of one message hop and a low waiting time of two message hops. The maximum concurrency of our algorithm is n, which implies that if all processes have requested critical sections of the same type, then all of them may execute their critical sections concurrently. Finally, the amortized message-size complexity of our algorithm is O(1). Our experimental results indicate that our algorithm outperforms existing algorithms with comparable complexity measures by as much as 50% in some cases.
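To make the problem statement concrete, the safety property described above (same-group requests may run concurrently; different-group requests are mutually exclusive) can be sketched as a centralized, single-machine lock. This is only an illustration of the problem's semantics under a shared-memory assumption; it is not the paper's distributed, priority-based algorithm, and the class and method names below are hypothetical.

```python
import threading

class GroupLock:
    """Centralized sketch of group mutual exclusion semantics.
    Threads requesting the same group may hold the lock together;
    requests for a different group wait until the lock is free.
    This is NOT the distributed algorithm from the paper."""

    def __init__(self):
        self._cond = threading.Condition()
        self._group = None   # group currently inside the critical section
        self._count = 0      # number of threads currently inside

    def acquire(self, group):
        with self._cond:
            # Wait until the lock is free or already held by this group.
            while self._count > 0 and self._group != group:
                self._cond.wait()
            self._group = group
            self._count += 1

    def release(self):
        with self._cond:
            self._count -= 1
            if self._count == 0:
                self._group = None
                self._cond.notify_all()
```

Note that this sketch provides no fairness or priority guarantees: a steady stream of same-group requests can starve a waiting request for another group, which is one of the concerns distributed group mutual exclusion algorithms must address.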