11.4 Strong Serializers

So far we have only considered serializers that result in a search tree whose depth grows quadratically in the number of tasks on a resource. In this section we introduce a serializer for which the depth of the search tree grows only linearly in the number of tasks. In each choice node we order not only two tasks but several of them. This is done by stating that a single task must precede all other tasks on a resource.

A further disadvantage of the bottleneck serializer considered in Section 11.3 is its static bottleneck criterion. Instead, we take the changing size of the domains at run time into account to select the resource to be serialized next. This is also the approach taken by the first-fail distribution strategy.


We start with a better criterion to select a resource. Let S be the set of tasks on a resource r. The available time to schedule all the tasks in S is \Lct{S}-\Est{S}. This value is called the supply of r. The overall time needed to schedule the tasks in S is \Dur{S}.

demand, global slack

This value is called the demand of r. The difference between supply and demand is called the global slack of r ({\it slack}^r_g) and denotes the free space on the resource. The smaller the global slack, the more critical the resource is (the tasks on it can be shifted only in a very limited way).
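To make the definitions concrete, here is a small sketch in Python rather than Oz; the representation of a task as an (est, dur, lct) triple is our own assumption and not part of the Oz Schedule module.

```python
def global_slack(tasks):
    """Global slack of a resource: supply minus demand.

    A task is an illustrative (est, dur, lct) triple.
    """
    est = min(t[0] for t in tasks)     # \Est{S}: earliest start time
    lct = max(t[2] for t in tasks)     # \Lct{S}: latest completion time
    supply = lct - est                 # time available on the resource
    demand = sum(t[1] for t in tasks)  # \Dur{S}: total duration
    return supply - demand

# Two tasks that must fit between time 0 and 12:
print(global_slack([(0, 3, 10), (2, 4, 12)]))  # 12 - 0 - 7 = 5
```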

Hence, one could use the global slack as a criterion to select a resource. But a small example reveals that this criterion still has an important flaw: it is too coarse-grained. Assume S to be the set \{A,B,C\} described in the following table.
[Table: earliest start times, durations, and latest completion times of the tasks A, B, and C, yielding \Est{S}=0, \Lct{S}=20, and \Dur{S}=11.]
The global slack is \Lct{S} - \Est{S} - \Dur{S} = 20 - 0 - 11 = 9. But now consider the set S'=\{B,C\}. We obtain \Lct{S'} - \Est{S'} - \Dur{S'} = 13 - 1 - 9 = 3. This means that for B and C we have far less free space to shift the tasks than is indicated by the global slack. Thus, we refine our criterion as follows.
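The computation can be checked with task data consistent with the numbers above; note that the individual values for B and C below are illustrative assumptions, since only their combined duration and their common time window are fixed by the text.

```python
# Hypothetical (est, dur, lct) triples consistent with the example;
# the split of the 9 time units between B and C is an assumption.
A = (0, 2, 20)
B = (1, 4, 13)
C = (1, 5, 13)

def slack(tasks):
    est = min(t[0] for t in tasks)
    lct = max(t[2] for t in tasks)
    dur = sum(t[1] for t in tasks)
    return lct - est - dur

print(slack([A, B, C]))  # global slack: 20 - 0 - 11 = 9
print(slack([B, C]))     # slack of the subset: 13 - 1 - 9 = 3
```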

task interval

Let T_1 and T_2 be two tasks on the same resource r and S the set of all tasks running on r. If \Est{T_1} \leq \Est{T_2} and \Lct{T_1} \leq \Lct{T_2}, we call the set I(T_1,T_2) = \{ T\ |\ T \in S,\ \Est{T_1} \leq \Est{T},\ \Lct{T} \leq
\Lct{T_2}\} the task interval defined by T_1 and T_2 (see also [CL94]). Intuitively, a task interval is the set of tasks which must be scheduled between \Est{T_1} and \Lct{T_2}. Let I_r be the set of all task intervals on the resource r.

local slack

The local slack of r ({\it slack}^r_l) is now defined as

\min(\{\Lct{I}-\Est{I}-\Dur{I}|I \in I_r\})
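A direct, unoptimized way to compute the local slack is to enumerate all pairs (T_1, T_2), build the corresponding task interval, and take the minimum slack. A Python sketch under the same hypothetical (est, dur, lct) representation as before:

```python
def local_slack(tasks):
    """Minimum slack over all task intervals; a naive sketch."""
    best = None
    for t1 in tasks:
        for t2 in tasks:
            # (t1, t2) defines a task interval if est(t1) <= est(t2)
            # and lct(t1) <= lct(t2)
            if t1[0] <= t2[0] and t1[2] <= t2[2]:
                interval = [t for t in tasks
                            if t1[0] <= t[0] and t[2] <= t2[2]]
                s = t2[2] - t1[0] - sum(t[1] for t in interval)
                best = s if best is None else min(best, s)
    return best

# With task data consistent with the example above (B, C values assumed):
print(local_slack([(0, 2, 20), (1, 4, 13), (1, 5, 13)]))  # 3
```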


critical resource

If two resources have the same local slack, we use the global slack to break the tie. Thus, we select as the next resource for serialization the one which is minimal according to the lexicographic order ({\it slack}^r_l,{\it slack}^r_g). The selected resource is called the critical resource. Note that the local slack of a resource with n tasks can be computed in O(n^3) time.
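The selection rule might be sketched as follows (again in Python with the hypothetical task-triple representation; the slack helpers are repeated to keep the sketch self-contained):

```python
def global_slack(tasks):
    # supply minus demand of a resource
    return (max(t[2] for t in tasks) - min(t[0] for t in tasks)
            - sum(t[1] for t in tasks))

def local_slack(tasks):
    # minimum slack over all task intervals (naive enumeration)
    best = None
    for t1 in tasks:
        for t2 in tasks:
            if t1[0] <= t2[0] and t1[2] <= t2[2]:
                interval = [t for t in tasks
                            if t1[0] <= t[0] and t[2] <= t2[2]]
                s = t2[2] - t1[0] - sum(t[1] for t in interval)
                best = s if best is None else min(best, s)
    return best

def critical_resource(resources):
    """resources maps a name to its task list; pick the resource
    minimal in the lexicographic order (local slack, global slack)."""
    return min(resources,
               key=lambda r: (local_slack(resources[r]),
                              global_slack(resources[r])))

resources = {"r1": [(0, 2, 20), (1, 4, 13), (1, 5, 13)],  # local slack 3
             "r2": [(0, 3, 20), (5, 3, 20)]}              # local slack 12
print(critical_resource(resources))  # r1
```

Tuple comparison in Python is lexicographic, so the `key` above implements the order ({\it slack}^r_l,{\it slack}^r_g) directly.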

Next we determine the constraints to distribute with. Let u(r) be the set of tasks on the critical resource r which are not yet ordered with respect to all other tasks on r. Using the ideas of edge-finding, we compute the set F of all tasks in u(r) which can be scheduled first: F = \{T \ |\ T \in u(r),\ \Lct{u(r) \backslash
\{T\}} - \Est{T} \geq \Dur{u(r)}\}. In a distribution step, each of the tasks in F, say T, may be scheduled before all others, and T can then be deleted from u(r). The task in F which is smallest according to the lexicographic order (\Est{T},\Lst{T}) is selected first for distribution. This choice leaves as much space as possible for the remaining tasks to be scheduled on the resource. We then distribute with the constraint that T precedes all other tasks in u(r): \forall T' \in u(r)\backslash \{T\}:\ T + \Dur{T} \leq T'. If this choice leads to a failure, the next task in F according to our criterion is tried.
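A sketch of the set F and of the selection of the next task, in Python with the hypothetical (est, dur, lct) triples used above (\Lst{T} is lct - dur):

```python
def firsts(u):
    """Tasks in u that may be scheduled before all others,
    i.e. lct(u \\ {T}) - est(T) >= dur(u)."""
    total = sum(t[1] for t in u)
    if len(u) == 1:
        return list(u)
    return [t for i, t in enumerate(u)
            if max(s[2] for j, s in enumerate(u) if j != i)
               - t[0] >= total]

def pick_first(F):
    # smallest in the lexicographic order (est, lst), lst = lct - dur
    return min(F, key=lambda t: (t[0], t[2] - t[1]))

u = [(0, 2, 20), (1, 4, 13), (1, 5, 13)]
print(pick_first(firsts(u)))  # (0, 2, 20): earliest possible start
```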

The overall strategy is as follows. We select a critical resource according to our criterion developed above. Then we serialize the critical resource by successively selecting tasks to be scheduled before all others. After the critical resource is serialized, the next critical resource is selected for serialization. This process is repeated until all resources are serialized.

The described serializer follows the ideas of [BLN95] which in turn adopts ideas of [CP89]. The serializer is available through Schedule.firstsDist.

We immediately apply our new serializer to the bridge problem.

{ExploreBest {Compile Bridge Schedule.firstsDist} Earlier}

The optimal solution can be found and its optimality proven with only 90 choice nodes. The proof of optimality alone (problem OptBridge) now needs only 22 choice nodes.

But we can do better. In addition to the set F of tasks, we can compute the set L of tasks which may be scheduled after all other tasks (see also Section 11.3). In this case, the task T which is tried first to be scheduled after all others is the one which is maximal according to the lexicographic order (\Lct{T},\Ect{T}). A further serializer computes both F and L and then selects the set with the smaller cardinality. This serializer is available through Schedule.firstsLastsDist.
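By symmetry with F, the set L and its selection rule might be sketched as follows; the exact membership test for L is our assumption, mirroring the edge-finding test for F (\Ect{T} is est + dur):

```python
def lasts(u):
    """Tasks that may be scheduled after all others (symmetric to F),
    i.e. lct(T) - est(u \\ {T}) >= dur(u); an assumed formulation."""
    total = sum(t[1] for t in u)
    if len(u) == 1:
        return list(u)
    return [t for i, t in enumerate(u)
            if t[2] - min(s[0] for j, s in enumerate(u) if j != i)
               >= total]

def pick_last(L):
    # maximal in the lexicographic order (lct, ect), ect = est + dur
    return max(L, key=lambda t: (t[2], t[0] + t[1]))

u = [(0, 2, 20), (1, 4, 13), (1, 5, 13)]
print(pick_last(lasts(u)))  # (0, 2, 20): latest completion time
```

A firsts-and-lasts serializer in this style would compare the cardinalities of the two sets (e.g. via `len`) and distribute on the smaller one.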

Using Schedule.firstsLastsDist we can find the optimal solution and prove its optimality with only 30 choice nodes (see Figure 11.17). Note that in contrast to Figure 11.15 where we have needed 8 solutions to reach the optimal one, we now find the optimal solution immediately. The size of the search tree is reduced by more than an order of magnitude.

Figure 11.17: A search tree for the bridge problem.

The optimality of the problem can be proven with only 4 choice nodes.

Let m be the number of resources to consider in a scheduling problem and let n be the maximal number of tasks on a common resource. Then the described serializer has a run time complexity of O(m \cdot n^3) if a resource has to be selected and O(n) if only the set F or L has to be computed.


Because this kind of serializer successively serializes all resources, we call it a resource-oriented serializer.

Christian Schulte and Gert Smolka
Version 1.4.0 (20080702)