The process allocation in parallel interpretation of logic programs (abstract only)

  • Authors:
  • Wen-Kai Chung; William B. Day

  • Affiliations:
  • Computer Science and Engineering Department, Auburn University, Alabama; Computer Science and Engineering Department, Auburn University, Alabama

  • Venue:
  • CSC '87 Proceedings of the 15th annual conference on Computer Science
  • Year:
  • 1987

Abstract

One of the most appealing characteristics of logic programs is the natural and abundant non-determinism of their execution. This non-determinism allows a non-conventional computer to pursue highly parallel computation, and considerable effort has been expended to exploit this potential. The AND/OR Process Model of Conery's dissertation [1] and Shapiro's Concurrent Prolog [2] are two famous pioneering efforts in this area; they are frequently referenced in the literature as bases for improvement and comparison.

In these models and their successors, a logic program is solved by a set of conceptually tree-structured processes. Each process is assumed to have a separate copy of the whole program and dynamically spawns dependent processes to solve subgoals. Eventually, a process can solve its goal by simply matching the goal against a unit clause in the program and reporting the solution (or a failure message) to its parent process.

While these models faithfully create dynamic processes to solve parallel literals and achieve a high degree of parallelism, they also suffer several difficulties. First, they are all based on the traditional view that a program is a stream of machine instructions; notably, the knowledge-base semantics of a logic program is not considered. A logic program can be modified at run time, which is semantically understood as knowledge-base maintenance. The assert/retract "predicates" in Prolog are simple but typical examples of knowledge-base maintenance. For an intelligent system, knowledge-base maintenance means learning, and these models fail to support this requirement effectively.

Moreover, more parallelism does not necessarily mean greater speed. In a multi-processing environment, the overhead of control and communication needed to distribute subtasks, coordinate them, and collect results is typically very high; such distribution is favorable only when a subtask is larger than the overhead it incurs. A logic program has poor locality.
It also requires each process to retain a large amount of administrative information even after a subgoal is solved, and it demands excessive interprocess communication to reference variable-binding information. These observations indicate that a large overhead is associated with parallel execution under these models, yet each subtask assigned to a process is comparatively simple: the unification of a goal with a clause is little more than string matching. We summarize these observations and conclude that dynamic process allocation does not appear efficient for logic programs.

We therefore propose a distributed computation model based on static process allocation. A logic program is partitioned prior to execution, and logically related program clauses are physically grouped together and allocated to the same process. The relationships among processes are thus relationships among their knowledge bases, which are static and known at process-allocation time. Each process handles all requests for knowledge-base maintenance or knowledge deduction (normal execution) directed to its local knowledge base; it references only that knowledge base and communicates with a few predetermined processes whose knowledge bases are logically related. Newly derived knowledge in a process may be inserted into its local knowledge base to improve future performance, a simple level of learning; inconsistent information in a knowledge base may also be resolved locally, a deeper level of learning. This model increases the power and intelligence of each process and thereby improves locality, reduces interprocess-communication requirements, and provides opportunities for machine learning.

A prototype system that simulates this static process-allocation model for parallel interpretation of a subset of Prolog is being implemented in Ada. The prototype may be used for further study of different schemes of program partitioning and allocation and of their influence on machine learning and system performance.
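The static allocation scheme described above can be illustrated with a minimal sketch. The names (`ClauseProcess`, `partition`) and the grouping rule (one process per predicate symbol, as a stand-in for "logically related clauses") are illustrative assumptions, not details from the paper; the sketch shows only the core idea that clauses are grouped before execution and that knowledge-base maintenance, in the spirit of Prolog's assert, stays local to the owning process.

```python
# Hypothetical sketch of static process allocation: clauses are grouped
# by predicate symbol before execution, and each group is owned by one
# "process" (modeled here as a plain object). A real system would place
# these on separate processors and route messages between them.

class ClauseProcess:
    """Owns the clauses of one predicate: its local knowledge base."""

    def __init__(self, predicate):
        self.predicate = predicate
        self.clauses = []  # the local knowledge base

    def assert_clause(self, clause):
        # Knowledge-base maintenance (cf. Prolog's assert) is handled
        # locally, without consulting any other process.
        self.clauses.append(clause)

    def solve(self, goal):
        # Ground-goal lookup only; a real interpreter would unify the
        # goal with clause heads rather than compare for equality.
        return [c for c in self.clauses if c == goal]


def partition(program):
    """Statically allocate clauses: one process per predicate symbol."""
    processes = {}
    for clause in program:
        pred = clause[0]  # clause is a tuple (predicate, arg1, arg2, ...)
        processes.setdefault(pred, ClauseProcess(pred)).assert_clause(clause)
    return processes


# Two facts for the same predicate land in the same process, so any
# query about "parent" touches only that process's local knowledge base.
program = [("parent", "tom", "bob"), ("parent", "bob", "ann")]
procs = partition(program)
print(procs["parent"].solve(("parent", "tom", "bob")))
```

Because the grouping is fixed before execution, the set of processes a query can reach is known statically, which is the property the model exploits to cut interprocess communication.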