Decomposition of Knowledge for Concurrent Processing

  • Authors:
  • Gilbert Babin; Cheng Hsu

  • Venue:
  • IEEE Transactions on Knowledge and Data Engineering
  • Year:
  • 1996

Abstract

Distributed systems are often highly heterogeneous and, as a result, may not readily cooperate. To alleviate this problem, we have developed an environment that preserves the autonomy of the local systems while enabling distributed processing. This is achieved by 1) modeling the different application systems into a central knowledge base (called a Metadatabase), 2) providing each application system with a local knowledge processor, and 3) distributing the knowledge among these local shells. This paper describes the knowledge decomposition process used for this distribution. Decomposition minimizes the cooperation needed among the local knowledge processors by "serializing" the rule execution process: a rule is decomposed into an ordered set of subrules, each of which is located at a specific local knowledge processor and executed in sequence. The goals of the decomposition algorithm are to minimize the number of subrules produced, thereby reducing the time spent in communication, and to ensure that the sequential execution of the subrules is "equivalent" to the execution of the original rule.
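The serialization idea in the abstract can be illustrated with a minimal sketch. This is not the paper's algorithm; it assumes a hypothetical rule representation in which each primitive operation (condition test or action) is tagged with the local system that holds the data it touches, and it forms subrules by grouping consecutive operations at the same site, which both preserves the original execution order and minimizes the number of subrules for that order:

```python
from dataclasses import dataclass
from itertools import groupby

@dataclass
class Op:
    """One primitive operation of a rule (hypothetical representation)."""
    site: str   # local knowledge processor that must execute this op
    expr: str   # condition test or action, shown as text for illustration

def decompose(rule_ops):
    """Split an ordered list of operations into the fewest subrules such
    that each subrule executes entirely at one site and running the
    subrules in sequence reproduces the original operation order."""
    return [(site, [op.expr for op in ops])
            for site, ops in groupby(rule_ops, key=lambda op: op.site)]

# Example rule touching data at two (hypothetical) application systems.
rule = [Op("ERP", "check inventory > 0"),
        Op("ERP", "reserve item"),
        Op("ShopFloor", "schedule job"),
        Op("ERP", "confirm order")]

for site, exprs in decompose(rule):
    print(site, exprs)
```

Note that the site sequence ERP, ShopFloor, ERP yields three subrules, not two: because execution order must be preserved, operations at the same site cannot be merged across an intervening visit to another site. Each boundary between subrules corresponds to one inter-processor message, which is why minimizing the subrule count reduces communication time.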