At the simplest level, it is possible to think of a program as being executed sequentially, instruction by instruction. This is almost never true; even the simplest processors typically interrupt the sequential flow to handle external events, such as pressing a key on a keyboard or control panel connected to the computer. However, if we trust the operating system to make a clean separation between such interrupt handlers and the running application, then it is possible to think, at a higher level of abstraction, of some applications as running sequentially. But many applications are concurrent by nature, and these are the most difficult to get right. There are many examples of small programs with two parallel components that have subtle and hard-to-find bugs in them, and parallel or concurrent systems are notoriously hard to program.

Even the simple sequential model requires special mathematical treatment, because it has to represent the changing state of the computer. Every operation in a program changes the state of the machine in some way; otherwise, there would be no point in executing it. There are many types of changes: first, the location of the next instruction (the "program counter") changes, either to the next instruction in the sequence, or to a different place for instructions that change the flow of control. Some instructions change the contents of memory, registers, or various control bits (for example, one that records whether the previous comparison resulted in a zero value). High-level languages hide some of these details, but most (with the notable exception of functional and logic languages) still have the notion of changing state.

A logical model of programs with changing state must represent this state explicitly. If x is a variable in a program, the formula x = 3 is meaningless with respect to a program, because it may be true or false in different states of the computation of the same program. The logical model must have an explicit representation of states, and some syntactic way of referring to different states. One of the best-known formalisms is called *Hoare logic*, after C. A. R. Hoare, Turing Award winner and famous, among other things, as the inventor of the quicksort algorithm.

Every formula in this logic is a *Hoare triple* of the form {P} A {Q}, where A is a (possibly partial) program and P and Q are logical statements about the variables of the program. This formula asserts that if P holds in the state just prior to the computation of A, and if this computation terminates, then Q will be true in the resulting state. For example, a (somewhat simplified) way of describing the effect of an assignment x := e in a program is the formula {e = c} x := e {x = c}, where c is a constant. (Note the crucial difference between logical equality and assignment, so unfortunately blurred by languages from Fortran to Java that use "=" for assignment.) This formula asserts that if the value of the expression e in the state preceding the assignment is c, then the value of x after the assignment is also c.
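The assignment rule can be checked at runtime for a single concrete instance. The sketch below (in Python, with y + 1 standing in for e and 7 for c; a hypothetical illustration, since Hoare logic is a proof system, not a runtime-checking mechanism) asserts the precondition before the assignment and the postcondition after it:

```python
# One instance of the assignment rule {e = c} x := e {x = c},
# with e taken to be y + 1 and c = 7.

y = 6
c = 7

assert y + 1 == c  # precondition {e = c}: expression e equals c in this state
x = y + 1          # the assignment x := e
assert x == c      # postcondition {x = c}: x now equals c
```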

As another example, consider how to prove that a certain formula holds for a conditional: {P} if C then A else B {Q}. If the condition C is true in the state before the computation, then the conditional will choose the subprogram A. In order for the desired formula to hold, it must be the case that if P and C are true before the computation, Q must be true afterwards (provided, as always, that the computation terminates). Formally, {P **and** C} A {Q} must be true. Similarly, {P **and not** C} B {Q} must hold to guarantee the desired result in the case where the condition C is initially false. And indeed, the desired formula follows from the latter two in Hoare logic.
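A concrete instance of this rule is the triple {true} if x < 0 then y := -x else y := x {y = |x|}, where each branch satisfies its own triple under the strengthened precondition. The sketch below (a hypothetical Python illustration) annotates the two branch triples and asserts the shared postcondition:

```python
# Runtime illustration of the conditional rule, for the triple
# {true} if x < 0 then y := -x else y := x {y = |x|}.

def abs_with_contract(x: int) -> int:
    if x < 0:
        y = -x       # branch triple: {x < 0} y := -x {y = |x|}
    else:
        y = x        # branch triple: {not (x < 0)} y := x {y = |x|}
    assert y == abs(x)  # postcondition Q: y = |x| holds either way
    return y

for x in (-5, 0, 3):
    abs_with_contract(x)
```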

As is the case for most logics of programs, the difficulty lies in reasoning about loops (and recursion). In order to prove a property of a loop using these logics, it is necessary to come up with a *loop invariant*: a property that always holds at the beginning of each iteration. In order to prove a property P following the loop, you prove that the invariant holds on entry to the loop, that it is preserved across one iteration, and that if you exit the loop with the invariant holding, P must be true at that point.
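The three proof obligations can be made concrete for a loop that sums the integers 0 through n-1. The sketch below (a hypothetical Python illustration) asserts the invariant at the loop head on every iteration and checks that, combined with the exit condition i == n, it yields the postcondition:

```python
# Invariant at the top of each iteration: total == i * (i - 1) // 2,
# i.e. total is the sum of 0..i-1.  On exit, i == n, so the invariant
# gives the postcondition P: total == n * (n - 1) // 2.

def sum_below(n: int) -> int:
    total = 0
    i = 0
    # On entry: i == 0 and total == 0, so the invariant holds trivially.
    while i < n:
        assert total == i * (i - 1) // 2  # invariant at the loop head
        total += i
        i += 1
        # The invariant is preserved: the new total is the sum of 0..i-1
        # for the incremented i.
    assert total == n * (n - 1) // 2      # invariant + exit condition => P
    return total

print(sum_below(10))  # 45
```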

This is very similar to a proof by induction: the first step is the base of the induction, and the second is the inductive step, in which you prove that if the invariant holds at the beginning of one iteration, it also holds at the beginning of the next one. As is the case with inductive proofs, coming up with a suitable invariant is difficult. Much of the work of automatic theorem provers (for mathematics as well as for logics of programs) is discovering the induction hypotheses or, equivalently, the loop invariants. (In fact, in one of my favorite theorem provers, ACL2, these are exactly the same.)

This kind of reasoning is very low-level, and it is impossible to use it to prove the correctness of a large program manually. Automated theorem provers are a great help, but the process is still difficult and time-consuming, and requires considerable skill in logic and in the specific theorem prover. It is therefore limited to relatively small but safety-critical programs. Wide application of automatic theorem proving to large-scale programs seems to be far in the future. As a result, there is a chasm between the people developing logics of programs and associated theorem provers, and the general population of software developers, most of whom are not even aware of the existence of these methods and tools. But there is a middle way; I will discuss it in the next post.
