Thursday, January 08, 2009

More notes on the Search

In my previous post on The Search for Mathematical Roots, I mentioned that one of the things Frege complained about, according to Grattan-Guinness, was the overloading of symbols. I want to expand on that briefly in this post. I said that the same thing is done in proof theory, but that turns out not to be completely accurate. The situation, as far as I can tell from The Search for Mathematical Roots, was that logicians, especially in the algebraic tradition, would often use one symbol to designate many different things. For example, some would use '1' to designate the universe of discourse as well as the truth-value true. The equals sign '=' would do double duty as identity among elements and as equivalence between propositions, and it would also get used to indicate a definition. This would not be so bad, but often there was no clear way to tell in which sense a sign was being used, and a sign would sometimes appear in a single sentence in multiple senses. Things get hairier once quantifiers are added, since the variables quantified over would sometimes be propositions and sometimes not.
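To make the complaint concrete, here is a schematic illustration of my own (not an example drawn from the book), in roughly the notation of the Boole-Schröder tradition:

    x + (1 - x) = 1     '1' read as the universe of discourse, '=' as identity of classes
    x = 1               '1' read as the truth-value true; is '=' identity or propositional equivalence?

Nothing in the notation itself signals which reading is in force; the reader has to work it out from context, when context settles it at all.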


How does this differ from the situation in proof theory? In proof theory symbols are often overloaded in the sense that, in a sequent calculus, a symbol means one thing on the left of the sequent and another on the right. Two examples are the comma, which behaves conjunctively on the left and disjunctively on the right, and the empty sequence, which acts as the true on the left and the false on the right. The difference between proof theory and the situation in the 19th century is that the overloading in proof theory is systematic: one can easily figure out what a symbol means from its context. Not so with the algebraists. Grattan-Guinness, and Badesa, point to places in proofs where a symbol slides from, say, a propositional interpretation to one that has no clear propositional meaning. Often, it seems, the logicians themselves did not draw the distinctions and did not notice that they were slipping between the distinct senses. The overloading in proof theory is benign and useful, whereas in the case of the 19th century algebraists it was confusing and sometimes hampered understanding.
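For concreteness, the two sequent examples look like this (standard textbook notation, not tied to any one system):

    A, B \vdash C     comma on the left: A and B together entail C
    C \vdash A, B     comma on the right: C entails A or B
    \vdash A          empty left-hand side: A follows from the true, i.e. A is provable
    A \vdash          empty right-hand side: A entails the false, i.e. A is refutable

The position of a symbol relative to the turnstile fixes its reading, which is what makes the overloading systematic.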

Other examples are the signs for membership and set-inclusion, also things Frege harped on. Some logicians used membership and inclusion interchangeably. One reason Grattan-Guinness gives is that they took membership to be along the lines of the part-whole relation of mereology, though he doesn't use the word "mereology." While for some this slip was due to unclarity about the concepts involved, this particular instance of overloading wasn't always seen as bad. Peano apparently was guilty of it, and Quine, in an article on Peano's contributions to logic, said that he was right to do so. One gets an interesting set theory, one Quine liked, if one doesn't distinguish an element from its singleton, thereby treating membership the same as singleton inclusion.
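In modern notation, the two relations and the bridge between them run as follows (my gloss, not Grattan-Guinness's):

    a ∈ b                  membership: a is an element of b
    a ⊆ b                  inclusion: every element of a is an element of b
    a ∈ b  iff  {a} ⊆ b    singleton inclusion mirrors membership

If an element is identified with its singleton, so that a = {a}, then membership and singleton inclusion coincide, and the overloading becomes harmless in the way Quine suggested.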

3 comments:

ansten said...

Sundholm (and perhaps others) has remarked that the formulas in standard presentations of predicate logic serve (at least) three different roles: as standing for propositions, for propositions assumed, and for propositions demonstrated. Consider for instance a proof in a system of natural deduction of A->B and of A (these are proved, say, to get a proof of B). In such a proof, A will typically serve all three roles: first, as an assumption in order to derive A->B, but also as a theorem in the derived A, and finally as a proposition in the derived A->B.
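In tree notation the situation might be pictured like this (a schematic sketch):

    [A](1)
       :
       B
    --------- ->I, discharging (1)
     A -> B          A
    -------------------- ->E
             B

The bracketed A is an assumption, the A on the right is a theorem (proved elsewhere), and the A inside A -> B merely stands for a proposition; the notation itself marks only the first of these.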
I believe signs for assumptions are often marked in natural deduction, but there is no distinction there between the signs of propositions and of theorems.
If one believes that the distinctions in question are theoretically important, proof theory can be accused of an unfortunate overloading of symbols.
The same confusion of symbols does not arise in constructive type theory, where one distinguishes the judgement A true from the proposition A; it also has a notation for assumption, namely (A true).

Shawn said...

Ansten,
That is a good point about the distinctions in natural deduction. I think it diverges somewhat from the unfortunate overloading in early algebraic logic in the following sense. In the algebraic case, the overloading involved a shift between types: in the first example, between objects and propositions. The rules of natural deduction make sense when applied to theorems, assumptions, and propositions alike, whereas some of the results of operations in the algebraic case have no happy propositional interpretation.

That being said, there is something I like about the distinction you point to. In the tree formulation we don't have a local indication of theoremhood, though there would seem to be a global one, since we can check whether any assumptions remain undischarged. In Fitch notation, theorems would be marked by the absence of assumption lines to the left of the proposition, although there would still be some ambiguity between assumptions and theorems reiterated within the scope of other assumptions.
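To illustrate with a toy example (mine):

    1  A -> A             theorem, proved earlier
    2  | B                assumption
    3  | A -> A           reiteration, 1
    4  B -> (A -> A)      ->I, 2-3

Lines 1 and 4 have no assumption line to their left and so read as theorems, but the reiterated theorem at line 3 sits within the scope of the assumption B, where only its justification distinguishes it from an assumption.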

I've read a little bit on constructive type theory, but I'm still a little shaky as to what is going on in the judgment case there. How does 'A true' function, as opposed to 'A'? Is the latter just barred?

ansten said...

Yes, I certainly think the distinction is important for logic. For the mathematical proof theorist it may not matter much, but for the philosopher who wants to use proof theory in theorizing, I think it is important to at least be aware of the distinctions.

The A here is a type (or set, or sort, etc.), but it can be seen in different ways; in particular, it can be seen as a proposition -- this is a version of the formulas-as-types interpretation of Curry and Howard.
As you may know, the notion of judgement plays a fundamental role in constructive type theory, and is one of the features that distinguishes it from most other contemporary systems of logic. In the type theory itself the notion of judgement is taken to be primitive, although it is of course open to philosophical elucidation. The notion of proposition is not primitive but is explained by saying what one must know in order to make the judgement A prop: one can make this judgement if one knows how a canonical proof-object of A is formed, as well as how two equal canonical proof-objects are formed. A canonical proof-object is a canonical element of a type under the Curry-Howard isomorphism. What a canonical element is will therefore depend on the proposition (or type) in question -- it is by specifying these, together with specifying when two canonical elements are equal, that one explains a proposition (or defines a type). I would say, for instance, that a bird's sitting outside my window would be a canonical proof-object for the proposition that a bird is outside my window.
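To answer your notation question directly, the basic forms of judgement come out roughly as follows (my summary; see Martin-Löf for the official presentation):

    A prop             A is a proposition
    A true             A is true, i.e. there is a proof-object of A
    a : A              a is a (proof-)object of type A
    A true (B true)    A is true under the assumption that B is true

The bare 'A' is never asserted on its own; it figures only as part of a complete judgement such as A prop or A true.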

It is perhaps better that I refer to Martin-Löf's book, where these issues are treated in the first few sections.

I don't think the method you mention for telling whether a formula appearing in a natural deduction derivation counts as a theorem captures the right point. Assuming that we already have a notation for signalling theorems, what your method does is rather to distinguish "categorical" from "hypothetical" theorems, where the latter are theorems asserted under an assumption.
The \vdash in Hilbert-type systems is usually viewed as a theorem-indicator, but in this case theorem is a metamathematical notion, meaning a formula occurring as the last element of a finite string of formulas possessing certain syntactic properties.
Of course, the judgement-stroke in Frege's ideography is a "theorem-indicator".
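Schematically, and simplifying a good deal:

    \vdash A      in a Hilbert-type system: 'A is a theorem', a metamathematical claim about derivability
    B \vdash A    'A is derivable from the assumption B', a theorem in the hypothetical sense
    \vdash A      in Frege's ideography: the judgement-stroke, asserting A within the system itself

The same shape of sign, read in quite different ways -- fittingly enough, given the topic of the post.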

How is the style of The Search for Mathematical Roots? I have only read one piece by Grattan-Guinness, and that I found terribly dry, it being only a listing of "historical facts" without any interpretation at all...