Parallelism in Logic Programs
Raghu Ramakrishnan
1989
There is a tension between the objectives of avoiding irrelevant computation and extracting parallelism, in that a computational step used to restrict another must precede the latter. Our thesis, following [BeR87], is that evaluation methods can be viewed as implementing a choice of sideways information propagation graphs, or sips, which determines the set of goals and facts that must be evaluated. Two evaluation methods that implement the same sips can then be compared to see which obtains a greater degree of parallelism, and we provide a formal measure of parallelism to make this comparison. Using this measure, we prove that transforming a program using the Magic Templates algorithm and then evaluating the fixpoint bottom-up provides a ‘most parallel’ implementation for a given choice of sips, without taking resource constraints into account. This result, taken in conjunction with earlier results from [BeR87, Ra88], which show that bottom-up evaluation performs no irrelevant computation and is sound and complete, suggests that a bottom-up approach to parallel evaluation of logic programs is very promising. A more careful analysis of the relative overheads in the top-down and bottom-up evaluation paradigms is needed, however, and we discuss some of the issues. The abstract model allows us to establish several results comparing other proposed parallel evaluation methods in the logic programming and deductive database literature, thereby showing some natural, and sometimes surprising, connections. We consider the limitations of the abstract model and of the proposed bottom-up evaluation method, including the inability of sips to describe certain evaluation methods, and the effect of resource constraints. Our results shed light on the limits of the sip paradigm of computation, which we extend in the process.
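To make the notion of bottom-up fixpoint evaluation concrete, the following is a minimal illustrative sketch of a naive fixpoint computation for a small transitive-closure program. It is a toy written for this summary, not the Magic Templates transformation or the parallel evaluation method studied in the report; the relation names edge/path and the function bottom_up_fixpoint are invented for the example.

```python
def bottom_up_fixpoint(edges):
    """Naive bottom-up evaluation of:
        path(X, Y) :- edge(X, Y).
        path(X, Z) :- path(X, Y), edge(Y, Z).
    Facts are derived in rounds until no new facts appear (a fixpoint).
    """
    path = set(edges)                     # apply the base rule once
    while True:
        # apply the recursive rule to all facts derived so far
        new_facts = {
            (x, z)
            for (x, y) in path
            for (y2, z) in edges
            if y == y2 and (x, z) not in path
        }
        if not new_facts:                 # fixpoint reached
            return path
        path |= new_facts

if __name__ == "__main__":
    edge_facts = {("a", "b"), ("b", "c"), ("c", "d")}
    print(sorted(bottom_up_fixpoint(edge_facts)))
    # [('a', 'b'), ('a', 'c'), ('a', 'd'), ('b', 'c'), ('b', 'd'), ('c', 'd')]
```

Note that each round's rule applications are independent of one another, which is the intuition behind extracting parallelism from bottom-up evaluation; the report's formal treatment of sips and the Magic Templates transformation is, of course, considerably more involved than this sketch.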