Arbitrary interleaving
We have to justify the use of arbitrary interleavings in the abstraction. What this means, in effect, is that we ignore time in our analysis of concurrent programs. For example, the hardware of our system may be such that an interrupt can occur only once every millisecond, so we might be tempted to assume that several thousand statements of one process are executed before any statement of another process is executed. Instead, we are going to assume that after the execution of any statement, the next statement may come from any process. What is the justification for this abstraction?
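To make the abstraction concrete, here is a minimal sketch in Python (a language and naming scheme chosen here for illustration; the function `interleavings`, the shared variable `n`, and the local temporaries are not from the text). It enumerates every interleaving of the atomic statements of two small processes and collects the reachable final values of the shared variable:

```python
def interleavings(p, q):
    """Yield every interleaving of the statement sequences p and q
    that preserves the internal order of each process."""
    if not p:
        yield list(q)
    elif not q:
        yield list(p)
    else:
        for rest in interleavings(p[1:], q):
            yield [p[0]] + rest
        for rest in interleavings(p, q[1:]):
            yield [q[0]] + rest

# Each process increments a shared variable n in two fine-grained atomic
# steps: load n into a local temporary, then store temporary + 1 back.
p = [lambda s: {**s, "pt": s["n"]},         # p1: pt <- n
     lambda s: {**s, "n": s["pt"] + 1}]     # p2: n  <- pt + 1
q = [lambda s: {**s, "qt": s["n"]},         # q1: qt <- n
     lambda s: {**s, "n": s["qt"] + 1}]     # q2: n  <- qt + 1

finals = set()
for schedule in interleavings(p, q):
    state = {"n": 0, "pt": 0, "qt": 0}
    for step in schedule:
        state = step(state)
    finals.add(state["n"])

print(sorted(finals))   # [1, 2]: the final value depends on the interleaving
```

Because each process copies the shared variable into a local temporary before writing it back, some interleavings lose an update; under the arbitrary-interleaving abstraction, correctness must hold for every one of these schedules, not just the ones a particular machine happens to produce.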
The abstraction of arbitrary interleaving makes concurrent programs amenable to formal analysis, and as we shall see, formal analysis is necessary to ensure the correctness of concurrent programs. Arbitrary interleaving ensures that we only have to deal with finite or countable sequences of statements \(a_1, a_2, a_3, \dots\), and need not analyze the actual time intervals between the statements. The only relation between the statements is that \(a_i\) precedes or follows (or immediately precedes or follows) \(a_j\). Remember that we did not specify what the atomic statements are, so you can choose the atomic statements to be as coarse-grained or as fine-grained as you wish. You can initially write an algorithm and prove its correctness under the assumption that each function call is atomic, and then refine the algorithm to assume only that each statement is atomic.
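The choice of grain determines which interleavings must be analyzed. As a hedged illustration only (reusing the hypothetical incrementing processes from the previous sketch), the following treats each increment as a single coarse-grained atomic statement; at that grain only two interleavings exist, and the anomaly seen above cannot occur:

```python
# Coarse-grained version of the two incrementing processes: each increment
# is taken to be one atomic statement (an assumption made for illustration).
def atomic_increment(state):
    return {**state, "n": state["n"] + 1}

p = [atomic_increment]   # process p: a single atomic statement
q = [atomic_increment]   # process q: a single atomic statement

# With one atomic statement per process there are only two interleavings,
# p-then-q and q-then-p, and both lead to the same final value.
finals = set()
for schedule in ([p[0], q[0]], [q[0], p[0]]):
    state = {"n": 0}
    for step in schedule:
        state = step(state)
    finals.add(state["n"])

print(finals)   # {2}: at this grain the lost-update interleavings do not exist
```

A proof carried out at the coarse grain therefore covers fewer interleavings; when the algorithm is refined so that only individual statements are atomic, the larger set of interleavings must be considered again.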
The second reason for using the arbitrary interleaving abstraction is that it enables us to build systems that are robust to modification of their hardware and software. Systems are always being upgraded with faster components and faster algorithms. If the correctness of a concurrent program depended on assumptions about execution times, every modification to the hardware or software would require that the system be rechecked for correctness (see [62] for an example). For example, suppose that an operating system had been proved correct under the assumption that characters are typed in at no more than 10 characters per terminal per second. That is a conservative assumption for a human typist, but it would be invalidated if the input were changed to come from a communications channel.
The third reason is that it is difficult, if not impossible, to precisely repeat the execution of a concurrent program. This is certainly true of systems that accept input from humans, but even in a fully automated system there will always be some jitter, that is, some unevenness in the timing of events. A concurrent program cannot be debugged in the familiar sense of diagnosing a problem, correcting the source code, recompiling, and rerunning the program to check if the bug still exists. Rerunning the program may simply cause it to execute a different scenario from the one in which the bug occurred. The solution is to develop programming and verification techniques that ensure that a program is correct under all interleavings.