7 Concurrency For Free

This part of the tutorial addresses the following theme: what happens to programming when support for concurrency becomes extremely cheap and efficient. Suddenly, an entirely different style of programming and design is made possible. We are going to explore and exploit this new freedom.

Oz has very efficient, very economical, very lightweight threads, with fair preemptive scheduling. We don't mean that Oz threads are just somewhat better than brand X; we mean that brand X can't even see our dust with a telescope, er... well, just about anyway! In order to assuage the skeptics, we first exhibit a program that demonstrates massive concurrency and exercises the worst case. Doubters are encouraged to throw that program at their favorite programming language... and watch it die, eventually. Meanwhile, you could mount a clay tablet device, and engage in the more rewarding exercise of installing Windows from Sumerian backup.

The program is invoked with:

death --threads N --times M

and creates N threads. Each thread does nothing but yield immediately. Normally we would let the preemptive scheduler take care of interrupting a thread to switch to a new one, but here, in order to exercise the worst case, as soon as a thread is allowed to run, it explicitly yields. Thus the program does little else but switch between threads. Each thread yields M times and then terminates. When all threads have terminated, the program also terminates.

I just tried the following:

death --threads 10000 --times 10

In other words, 10000 threads are created and must each yield 10 times. This results in 100000 thread switches. It takes 3s on this little steam-driven laptop. I have run the same program on a real computer at the lab, but using:

death --threads 100000 --times 10

It takes 7.5s. There are 100000 threads runnable at all times, and they must perform 1000000 thread switches. Try creating 100000 threads in Java... really, go ahead, I insist! I promise not to laugh!

Just so you don't have to take my word for it, I coded the same program in Java and tried:

java Death 1000 10

This takes 2 minutes and 40 seconds!

What was the point of this exercise? It was not to prove that Oz is better than Java; in that respect the test above was deliberately unfair: Java was never intended to support designs with massive concurrency... and that is the point. Oz was designed from the start as a platform for concurrent computation. Because concurrency is so cheap and efficient, entirely new designs become possible that would not be realistic, or even conceivable, in other languages. Whenever you need to perform an operation asynchronously, you simply spawn a new thread. You can design your application as a collection of concurrent objects or agents, and so on.
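The "spawn a thread per asynchronous operation" style is not specific to Oz, of course; what Oz changes is its cost. As a minimal sketch of the style itself (in Java, since it appears below; the class name and printed message are illustrative, not from the tutorial):

```java
// Minimal sketch of "spawn a thread per asynchronous operation".
// AsyncSpawn and the printed message are illustrative names.
public class AsyncSpawn {
    public static void main(String[] args) throws InterruptedException {
        Thread op = new Thread(() -> {
            // the asynchronous operation goes here
            System.out.println("async op done");
        });
        op.start();   // returns immediately; the operation runs concurrently
        op.join();    // wait for it, only so the example has a clean end
    }
}
```

In Java each such thread carries a full native stack, which is why this style does not scale there; in Oz, threads are cheap enough that it becomes the natural design.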

Death by Concurrency in Oz

Here is the code of the Oz application used in the benchmark above:

functor 
import Application
define 
   proc {Yield} {Thread.preempt {Thread.this}} end  % explicitly yield the running thread
   proc {Run N}
      {Yield}
      if N>0 then {Run N-1} end 
   end 
   Args = {Application.getCmdArgs
           record(threads(single type:int optional:false)
                  times(  single type:int optional:false))}
   proc {Main N AllDone}
      if N==0 then AllDone=unit else RestDone in 
         thread {Run Args.times} AllDone=RestDone end 
         {Main N-1 RestDone}
      end 
   end 
   {Wait {Main Args.threads}}
   {Application.exit 0}
end 

Death by Concurrency in Java

Here is a very similar program, in Java:

class MiniThread extends Thread {
    int n;
    MiniThread(int m) { n=m; }
    public void run() {
        // yield n times, then terminate
        do { Thread.yield(); n--; } while (n>0);
    }
}
public class Death {
    public static void main(String[] argv) {
        int threads = Integer.parseInt(argv[0]);
        int times   = Integer.parseInt(argv[1]);
        for(int i=threads;i>0;i--) {
            // each thread must yield `times` times
            MiniThread t = new MiniThread(times);
            t.start();
        }
    }
}
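Note that main returns as soon as the threads are started; the JVM only exits once all non-daemon threads have terminated, so the wall-clock time of the process still covers the whole run. If you want the program to report its own elapsed time, one option (a self-contained sketch; the class name, default arguments, and output format are mine, not from the tutorial) is to join every thread:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of a self-timing variant of the benchmark: same MiniThread as
// above, but main() joins every thread so the measured time covers all
// thread switches. Defaults (1000 threads, 10 yields) are assumptions
// so the program also runs without arguments.
class MiniThread extends Thread {
    int n;
    MiniThread(int m) { n=m; }
    public void run() {
        do { Thread.yield(); n--; } while (n>0);
    }
}

public class DeathTimed {
    public static void main(String[] argv) throws InterruptedException {
        int threads = argv.length > 0 ? Integer.parseInt(argv[0]) : 1000;
        int times   = argv.length > 1 ? Integer.parseInt(argv[1]) : 10;
        long start = System.nanoTime();
        List<Thread> all = new ArrayList<>();
        for (int i = 0; i < threads; i++) {
            MiniThread t = new MiniThread(times);
            t.start();
            all.add(t);
        }
        for (Thread t : all) t.join();   // wait for every thread to finish
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println("switched ~" + (threads * times)
                           + " times in " + ms + " ms");
    }
}
```

Joining also makes the comparison with the Oz program fairer, since the Oz version explicitly waits for all threads before calling Application.exit.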


Denys Duchier, Leif Kornstaedt and Christian Schulte
Version 1.4.0 (20080702)