Imagine that you have a bunch of join-semilattices with bottom L_i, and a big family of monotone maps f_ij between them, forming a complete directed graph. If you want to leave an edge out of the graph, you can easily represent that by taking that edge's map to be the constantly-bottom function. Then the idea is to find a least fixpoint of the whole system: for every i, an l_i ∈ L_i such that l_i is the join over all j of f_ij(l_j).
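A minimal sketch of such a system, with each L_i taken to be finite sets of integers (join is union, bottom is the empty set) and the maps stored as a dict keyed by edge; the node names, the `solve` helper, and the particular maps are my own illustration, not anything fixed by the setup above. An edge left out of the dict behaves as the constantly-bottom function, matching the remark about omitted edges.

```python
from typing import Callable, Dict, FrozenSet, Tuple

Elem = FrozenSet[int]
Maps = Dict[Tuple[str, str], Callable[[Elem], Elem]]

BOTTOM: Callable[[Elem], Elem] = lambda s: frozenset()  # a missing edge

def solve(nodes, maps: Maps, max_iters: int = 100) -> Dict[str, Elem]:
    """Iterate l_i <- join over j of f_ij(l_j), starting from bottom."""
    state = {i: frozenset() for i in nodes}
    for _ in range(max_iters):
        new = {
            i: frozenset().union(*(maps.get((i, j), BOTTOM)(state[j])
                                   for j in nodes))
            for i in nodes
        }
        if new == state:
            return state  # a fixpoint of the whole system
        state = new
    raise RuntimeError("no fixpoint reached within iteration bound")

# Example system: A seeds itself with 0; B takes successors of A's contents.
# The edge (A, B) is absent, i.e. constantly bottom.
example_maps: Maps = {
    ("A", "A"): lambda s: s | {0},
    ("B", "A"): lambda s: {n + 1 for n in s},
    ("B", "B"): lambda s: s,
}
result = solve(["A", "B"], example_maps)
```

Starting from bottom and iterating finds the least fixpoint here because every map is monotone and the example stabilizes after a few rounds.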

The concrete example pertinent to logic programming is the one where the join-semilattices in question are just the boolean algebras of sets of propositions, an element being the set of propositions proved so far; join is union, bottom is the empty set. Finding a fixpoint is saturation.
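A sketch of saturation in that boolean-algebra instance: rules are represented as functions from the set proved-so-far to newly derivable propositions, which is my own encoding for illustration, not a fixed convention. Saturation is just iterating the joined consequences to a fixpoint.

```python
from typing import Callable, FrozenSet, Iterable

Props = FrozenSet[str]
Rule = Callable[[Props], Iterable[str]]

def saturate(rules: list, facts: Props = frozenset()) -> Props:
    """Join (union) the consequences of all rules until nothing new appears.

    Bottom is the empty set; the returned set is saturated: applying any
    rule again proves nothing new.
    """
    proved = frozenset(facts)
    while True:
        step = proved.union(*(r(proved) for r in rules))
        if step == proved:
            return proved
        proved = step

# Illustrative rules: q follows from p, and r follows from q.
rules = [
    lambda s: {"q"} if "p" in s else set(),
    lambda s: {"r"} if "q" in s else set(),
]
saturated = saturate(rules, frozenset({"p"}))
```

Since each rule only ever adds propositions, the step function is monotone, and on a finite proposition universe this loop is guaranteed to terminate.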

Since these aren't necessarily *complete* lattices, there is not necessarily a least fixpoint (consider the logic program `triv(z). triv(s N) :- triv N.`), and so we don't necessarily find a least fixpoint by simply iterating; but if we ever terminate, we've found one.
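The `triv` program can be run concretely to see the failure: encoding the naturals as ints (z = 0, s(n) = n + 1, both my own encoding choices), each iteration proves one more proposition, so within the semilattice of *finite* sets of propositions the iteration grows strictly forever and never hits a fixpoint.

```python
def triv_step(s: frozenset) -> frozenset:
    # Immediate consequences of:  triv(z).  triv(s N) :- triv N.
    # with naturals encoded as ints: z = 0, s(n) = n + 1.
    return s | {0} | {n + 1 for n in s}

s = frozenset()
for _ in range(50):
    s = triv_step(s)
# After 50 rounds we have proved triv(0) .. triv(49), and the next
# round still proves something new: no finite set is a fixpoint.
```

The least fixpoint "wants" to be the set of all naturals, which exists only in the completion (arbitrary sets of propositions), not among the finite sets the iteration passes through.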

The observation that distributed forward logic programming is sometimes effective is the observation that sometimes the least fixpoint on a single monolithic join-semilattice L with a monotone function f : L → L (i.e. the *non*-distributed case) is isomorphic to the least fixpoint of a system of little join-semilattices L_i. It's basically certain also that L then factors apart as the cartesian product of the L_i.
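A sketch of that factoring, under the same finite-set encoding as before: take L to be the cartesian product of the L_i (a dict of components, with join and bottom taken pointwise), and assemble the system of maps f_ij into one monotone F : L → L by F(l)_i = join over j of f_ij(l_j). The `product_solve` helper and the example maps are illustrative assumptions; the point is just that iterating the monolithic F recovers the same fixpoint the distributed system would.

```python
from typing import Callable, Dict, FrozenSet, Tuple

Elem = FrozenSet[int]
L = Dict[str, Elem]  # an element of the product lattice, one component per i
Maps = Dict[Tuple[str, str], Callable[[Elem], Elem]]

BOTTOM_MAP: Callable[[Elem], Elem] = lambda s: frozenset()

def product_solve(nodes, maps: Maps, max_iters: int = 100) -> L:
    """Iterate the single monolithic F on the product lattice from bottom."""
    def big_f(l: L) -> L:
        # F(l)_i = join over j of f_ij(l_j); missing edges act as bottom.
        return {i: frozenset().union(*(maps.get((i, j), BOTTOM_MAP)(l[j])
                                       for j in nodes))
                for i in nodes}

    l = {i: frozenset() for i in nodes}  # bottom of the product, pointwise
    for _ in range(max_iters):
        nxt = big_f(l)
        if nxt == l:
            return l
        l = nxt
    raise RuntimeError("no fixpoint reached within iteration bound")

# Same illustrative system as before: A seeds 0, B takes successors of A.
factored_maps: Maps = {
    ("A", "A"): lambda s: s | {0},
    ("B", "A"): lambda s: {n + 1 for n in s},
    ("B", "B"): lambda s: s,
}
monolithic_result = product_solve(["A", "B"], factored_maps)
```

Here the isomorphism is trivially the identity on dicts; the interesting direction in practice is recognizing that a given monolithic L admits such a product decomposition at all.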