One of the fundamental concerns of logic is the study of proofs: how can we convince ourselves, in a precise and mechanical way, of the truth of some claim? The desire to do so arose from the need to distinguish between valid and invalid mathematical proofs expressed in prose. Sometimes, hidden assumptions went unrecognized, leading to faulty proofs. For example, consider the first axiomatic proof system, Euclidean geometry, studied in middle school. It has five axioms, including the famous "parallel postulate". It is an amazing achievement, which is still an excellent example of how to reason formally, and was the source of many important advances in mathematics. But Euclidean geometry has many hidden assumptions, which are glossed over by the use of informal drawings. About 2200 years after Euclid, several mathematicians tried to completely formalize geometrical proofs. David Hilbert's axiomatization of geometry includes twenty axioms instead of Euclid's five. One of these states that between every two different points on a straight line there is a third point on the same line. This was so obvious to Euclid that he didn't even think to mention it in his list of axioms. But axioms are exactly those self-evident propositions on which the theory is based, and a proof isn't logically sound unless all of them are stated explicitly.

Logic is closely tied to computation, although the connection was made explicit only in the 20th century. A formal proof is a series of deductions, each of which is based on axioms or previously-proven propositions, using a set of deduction rules. It is supposed to be so simple that it can easily be checked for correctness, without any understanding of its subject matter, whether that is geometry, number theory, or abstract algebra. In other words, it should be easy to write a program that checks a formal proof; all you need to do is verify that each step follows from the axioms and from previous steps according to the rules.
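To make this concrete, here is a minimal sketch of such a checker in Python, under some simplifying assumptions of my own: formulas are nested tuples, implications are written `("->", p, q)`, and the only deduction rule is modus ponens (from `p` and `p -> q`, conclude `q`). Real proof systems have more rules, but the shape of the checker is the same.

```python
# Toy proof checker: formulas are nested tuples, e.g. ("->", ("A",), ("B",))
# means A -> B. The only deduction rule here is modus ponens.

def is_modus_ponens(step, earlier):
    """Does `step` follow from two earlier lines by modus ponens?"""
    for p in earlier:
        if ("->", p, step) in earlier:
            return True
    return False

def check_proof(axioms, proof):
    """Verify that each line is an axiom or follows from previous lines.

    Note: the checker needs no understanding of what the formulas mean;
    it only matches patterns against the axioms and the deduction rule.
    """
    for i, step in enumerate(proof):
        if step in axioms:
            continue
        if is_modus_ponens(step, proof[:i]):
            continue
        return False  # neither an axiom nor a valid deduction
    return True

# Example: from the axioms A and A -> B, derive B.
axioms = {("A",), ("->", ("A",), ("B",))}
proof = [("A",), ("->", ("A",), ("B",)), ("B",)]
print(check_proof(axioms, proof))  # True
```

Note that the checker is purely syntactic: it never interprets the formulas, which is exactly why such checking can be mechanical.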

But wait! Is there a limit to the complexity of the axioms or of the rules used in the proof? Without such limits, the whole exercise falls apart. For example, imagine a system of axioms that contains all true statements in number theory. This is, of course, an infinite set. How difficult is it to write a program to check whether a given statement is an axiom? Well, it turns out to be impossible! (This is a consequence of Gödel's first incompleteness theorem.) So, if we are to be able to write a program to check formal proofs, we need to restrict the possible sets of axioms. For example, in Gödel's theorems, the set of axioms is required to be recursively enumerable; this means that there is a (finite) program that can generate all axioms and never generates anything that is not an axiom. Of course, this program will never terminate if the set is infinite. That's acceptable, as long as the program is guaranteed to produce every axiom if you wait long enough.

So now a concept from computability has entered the study of logic itself. It is only fair, then, to apply logic to computation; and, indeed, the study of logics of programs is an important field in computer science. There are even computer programs called *theorem provers*, which have been used to prove difficult mathematical theorems as well as to verify the correctness of other programs. However, this has still not affected the day-to-day work of most software developers. In the next post, I will discuss why this is the case and what formal tools can still be used for software development today.
