11.3 Strong Propagators for Capacity Constraints

In this section we introduce the ideas for stronger propagation employed for capacity constraints in Oz.

First we show the weakness of the propagators we have introduced so far. We consider three tasks A, B and C, each with duration 8 and with the start time domain 1\#10. If we state for the pairs (A,B), (A,C) and (B,C) that the contained tasks must not overlap in time, either by using reified constraints or by applying Schedule.serializedDisj, no further propagation will take place. This is due to the local reasoning on task pairs: for each pair, these propagators cannot discard any value from the corresponding domains. On the other hand, the tasks must be scheduled between time point 1 and 18 (the latest completion time of A, B and C), a window of only 17 time units. Because the overall duration is 24, this is impossible.

Hence, we will use stronger propagators reasoning simultaneously on the whole set of tasks on a resource. The principal ideas behind this reasoning are simple but very powerful. First, for an arbitrary set S of tasks to be scheduled on the same resource, the available time must be sufficient (see the example above). Furthermore, we check whether a task T \notin S must be scheduled as the first or last task of S (similar reasoning applies if T is contained in S).

We introduce the following abbreviations for a task T.

\Est{T}: earliest possible start time of T

\Lst{T}: latest possible start time of T

\Ect{T}: earliest possible completion time of T, i.e., \Ect{T} = \Est{T} + \Dur{T}

\Lct{T}: latest possible completion time of T, i.e., \Lct{T} = \Lst{T} + \Dur{T}

For a set S of tasks we define

\Est{S} =  \min(\{\Est{T}|\ T \in S\})

\Lct{S} = \max(\{\Lct{T}|\ T \in S\})

\Dur{S} = \sum\limits_{T \in S} \Dur{T}

If the condition

\Lct{S} - \Est{S} < \Dur{S}

holds, no schedule of the tasks in S can exist. A strong propagator for capacity constraints fails in this case.
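This overload check is easy to state executably. The following Python sketch (our own illustration, not the Oz implementation) models a task by a triple (est, lct, dur) of earliest start, latest completion and duration, and tests the condition above on the three tasks A, B and C from the introductory example:

```python
# A task is modeled as a triple (est, lct, dur): earliest start time,
# latest completion time, and duration.

def overloaded(tasks):
    """True if Lct(S) - Est(S) < Dur(S), i.e. no schedule can exist."""
    est_s = min(est for est, _, _ in tasks)
    lct_s = max(lct for _, lct, _ in tasks)
    dur_s = sum(dur for _, _, dur in tasks)
    return lct_s - est_s < dur_s

# Tasks A, B, C from above: start domain 1#10, duration 8,
# hence est = 1 and lct = 10 + 8 = 18 for every task.
abc = [(1, 18, 8), (1, 18, 8), (1, 18, 8)]
print(overloaded(abc))  # True: 17 available time units cannot hold 24
```

A pairwise check of the three tasks would not detect this overload; only considering all tasks of the resource together does.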

Now we introduce some domain reductions by considering a task T and a set of tasks S where T does not occur in S. Assume that we can show that T cannot be scheduled after all tasks in S and that T cannot be scheduled between two tasks in S (if S contains at least two tasks). In this case we can conclude that T must be scheduled before all tasks in S.

More formally, if

\Lct{S} - \Est{S} < \Dur{S} + \Dur{T}

holds, T cannot be scheduled between \Est{S} and \Lct{S} (in particular, it cannot be scheduled between two tasks of S if S contains at least two tasks). If

\Lct{T} - \Est{S} < \Dur{S} + \Dur{T}

holds, T cannot be scheduled after all tasks in S. Hence, if both conditions hold, T must be scheduled before all tasks of S, and corresponding propagators can be imposed that narrow the domains of the variables (for example, every task in S must then start no earlier than \Ect{T}).

Analogously, if

\Lct{S} - \Est{S} < \Dur{S} + \Dur{T}

and

\Lct{S} - \Est{T} < \Dur{S} + \Dur{T}

hold, T cannot be scheduled between two tasks of S and cannot be scheduled before all tasks in S. Hence, T must be scheduled after all tasks of S.
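Both detection rules can be summarized in a small Python sketch (our illustration, not the Oz implementation; the function names are ours). A task is again modeled as a triple (est, lct, dur) of earliest start, latest completion and duration:

```python
def est(s): return min(t[0] for t in s)   # Est(S)
def lct(s): return max(t[1] for t in s)   # Lct(S)
def dur(s): return sum(t[2] for t in s)   # Dur(S)

def must_be_first(t, s):
    """True if task t (not in s) has to precede every task in s."""
    not_between = lct(s) - est(s) < dur(s) + t[2]
    not_after = t[1] - est(s) < dur(s) + t[2]   # Lct(T) - Est(S)
    return not_between and not_after

def must_be_last(t, s):
    """True if task t (not in s) has to follow every task in s."""
    not_between = lct(s) - est(s) < dur(s) + t[2]
    not_before = lct(s) - t[0] < dur(s) + t[2]  # Lct(S) - Est(T)
    return not_between and not_before

# Task t with start domain 0#2 and duration 3, i.e. est 0 and lct 5;
# two tasks in s with start domain 2#8 and duration 4, i.e. est 2, lct 12:
t = (0, 5, 3)
s = [(2, 12, 4), (2, 12, 4)]
print(must_be_first(t, s))  # True: t fits neither between nor after s
```

When must_be_first holds, a propagator can, for example, raise the earliest start of every task in s to t's earliest completion time t[0] + t[2]; must_be_last allows the symmetric reduction.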

edge-finding

Similar rules can be formulated if T is contained in S. For this kind of reasoning, the term edge-finding was coined in [AC91]. There are several variations of this idea, in [CP89], [AC91], [CP94], [MS96] from the Operations Research community and in [Nui94], [CL94], [BLN95], [Wür96] from the constraint programming community; they differ in the amount of propagation they achieve and in which sets S are considered for edge-finding. The resulting propagators propagate strongly, but are also more expensive than, for example, reified constraints. Depending on the problem, one has to choose an appropriate propagator.

For unary resources Oz provides two propagators employing edge-finding to implement capacity constraints. The propagator Schedule.serialized is an improved version of an algorithm described in [MS96]. A single propagation step has complexity O(n^2), where n is the number of tasks the propagator is reasoning on, i.e., the number of tasks on the resource considered by the propagator. Because the propagator runs until propagation reaches a fixed point, the overall complexity is O(k \cdot n^3), where k is the size of the largest domain of a task's start time (at most k \cdot n values can be deleted from the domains of the task variables).
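The fixed-point behaviour behind these bounds can be pictured with a small driver, sketched here in Python with a deliberately simple bounds-tightening rule (propagation of precedence constraints) standing in for the edge-finding step; the task model and all names are ours, not part of the Schedule module:

```python
# Tasks are dicts with a mutable earliest start `est` and a fixed
# duration `dur`; a precedence (a, b) demands that a finishes before
# b starts.

def step(tasks, precedences):
    """One propagation round; returns True if some bound changed."""
    changed = False
    for a, b in precedences:
        new_est = tasks[a]['est'] + tasks[a]['dur']
        if new_est > tasks[b]['est']:
            tasks[b]['est'] = new_est
            changed = True
    return changed

def propagate(tasks, precedences):
    """Run `step` again and again until a fixed point is reached."""
    while step(tasks, precedences):
        pass

tasks = {'a': {'est': 0, 'dur': 3},
         'b': {'est': 0, 'dur': 2},
         'c': {'est': 0, 'dur': 4}}
propagate(tasks, [('b', 'c'), ('a', 'b')])
print(tasks['c']['est'])  # 5: c must follow b, which must follow a
```

Because one round may enable further reductions in the next round (here, tightening b enables tightening c), the driver iterates; since every round that continues must change at least one bound, the number of rounds is limited by the total number of possible bound updates.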

The propagator Schedule.taskIntervals provides weaker propagation than described in [CL94] but stronger propagation than Schedule.serialized. While a single propagation step has complexity O(n^3), the overall complexity is O(k \cdot n^4).

Now we can solve the bridge construction problem with a propagator using edge-finding. By the statement

{ExploreBest {Compile Bridge  
                      Schedule.serialized
                      DistributeSorted}  
             Earlier}

we compute the optimal solution with a full search tree of 508 choice nodes, instead of 1268 as in the previous section.

proof of optimality

The improvement by strong propagation becomes even more dramatic if we constrain the bridge problem further by stating that the makespan must be strictly smaller than 104. Since we know that 104 is the optimal makespan, solving this modified problem amounts to proving optimality. The modified problem specification is

OptBridge = {AdjoinAt Bridge constraints
             proc {$ Start Dur}
                {Bridge.constraints Start Dur}
                Start.pe <: 104
             end}

Solving the modified problem with the simple propagator by

{ExploreBest {Compile OptBridge  
                      Schedule.serializedDisj
                      DistributeSorted}  
             Earlier}

we obtain a search tree with 342 choice nodes. Using the edge-finding propagator Schedule.serialized instead, we obtain a search tree with only 22 choice nodes. With Schedule.taskIntervals the search tree shrinks further to 17 choice nodes.

Note that for the proof of optimality the domains of the start times are rather narrow, whereas for the unconstrained problem they are rather wide. The narrower the domains are compared to the durations of the tasks, the more likely the conditions described above become true, so that propagation can take place. This is why edge-finding turns out to be a stronger improvement for the proof of optimality.


Christian Schulte and Gert Smolka
Version 1.4.0 (20080702)