The deadline for the Pitt-CMU conference [edit: has been extended to 12/15. Please submit!]
Tuesday, November 25, 2008
I just found out that Yiannis Moschovakis's Elementary Induction on Abstract Structures was released as a cheap Dover paperback over the summer. It was previously only available in the horrendously expensive yellow hardback series by... North-Holland, according to Amazon. The secondary literature on the revision theory of truth has recently nudged me into looking at this book, and it is nice to know that it is available at a grad-student-friendly price. Philosophical content to follow soon.
Posted by Shawn at 1:04 PM
Monday, November 24, 2008
I just finished reading Badesa's Birth of Model Theory. It places Löwenheim's proof of his major result in its historical setting and defends what is, according to the author, a new interpretation of it. The book was interesting on a few levels. First, it places Löwenheim in the algebraic tradition of logic. Part of this involves spending a chapter elaborating the logical and methodological views of the major figures in that tradition: Boole, Schröder, and Peirce. Badesa says that this tradition in logic hasn't received much attention from philosophers and historians. There is a book, From Peirce to Skolem, that investigates it further and that I want to read. I don't have much to say about the views of each of those logicians, but it does seem like there is something distinctive about the algebraic tradition in logic. I don't have a pithy way of putting it though, which kind of bugs me. Looking at Dunn's book on the technical details of the subject confirms the impression. From Badesa, it seems that none of the early algebraic logicians saw a distinction between syntax and semantics, i.e. between a formal language and its interpretation, nor much of a need for one. Not seeing the distinction was apparently the norm, and it was really with Löwenheim's proof that the distinction came to the forefront in logic. A large part of the book attempts to make Löwenheim's proof clearer by separating the syntactic and semantic elements of the proof.
The second interesting thing is how much better modern notation is than what Löwenheim and his contemporaries were using. I'm biased, of course, but they wrote a_{x,y} for what we'd write A(x,y). That isn't terrible on its own, but for various reasons the subscripts on the 'a' would sometimes carry superscripts of their own, and that quickly becomes horrendous.
The third interesting thing is that the book makes clear how murky some of the key ideas of modern logic were in the early part of the 20th century. Richard Zach gave a talk at CMU recently about how work on the decision problem cleared up (or possibly helped isolate; I'm not sure where the discussion ended up on that) several key semantic concepts. Löwenheim apparently singled out the first-order fragment of logic as important. As mentioned, his work brought the distinction between syntax and semantics to prominence. Badesa makes some further claims about how Löwenheim gave the first proof involving explicit recursion, or some such. I was a little less clear on that, although it seems rather important. Seeing Gödel's remarks, quoted in footnotes near the end of the book, on the importance of Skolem's work following Löwenheim's was especially interesting. Badesa's conclusion is that one of Gödel's big contributions to logic was bringing extreme clarity to the notions involved in the completeness proof of his dissertation.
I'm not sure the book as a whole is worth reading, though. I hadn't read Löwenheim's original paper or any of the commentaries on it, against which much of the book is directed. The first two chapters were really interesting, and there are sections of the later chapters that are good in isolation, mainly where Badesa comments on sundry interesting features of the proof or of his reconstruction. These are usually set off in separate numbered sections. I expect the book is much more engaging if you are familiar with the commentaries on Löwenheim's paper or are working in the history of logic. That said, there are parts of it that are quite neat. Jeremy Avigad has a review on his website that sums things up pretty well.
Sunday, November 16, 2008
While I am coming to the pleas for reprints late, it occurred to me that it would be very nice to have a Dover reprint of the two volumes of Entailment. Of course, I wouldn't complain if Princeton UP issued a cheap paperback version. The volumes are out of print and individually prohibitively expensive. There are also not enough copies of volume 2 floating around. It can be hard to get one's hands on volume 2 around here, which is unfortunate since I've lately needed to look at it.
[Edit: Looking at the comments on the thread I linked to, it also strikes me that it'd be nice to have a volume on the theme of logical inferentialism. It would have reprints of Gentzen's main papers, some of Prawitz's stuff, Prior's tonk article, Belnap's reply and his display logic paper, an appropriate smattering of stuff from Dummett and Martin-Löf, possibly some of the technical work done by Schroeder-Heister, Kremer's philosophical papers on the topic, Hacking's piece, and some of Read's and Restall's papers. I'm sure there are others that could go into it, although I think what I've listed would already push it into the two-volume range. Dare to dream...]
Posted by Shawn at 9:00 AM
Friday, November 14, 2008
There's what appears to be a nice article on combinatory logic up at the Stanford Encyclopedia, authored by Katalin Bimbo. The article briefly mentions Meyer's work on combinators, and it talks about the connection between combinators and non-classical logics. However, it doesn't seem to make explicit the connection between combinators and structural rules in a sequent calculus, which Meyer calls the key to the universe.
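That connection can be illustrated concretely: each basic combinator's reduction rule acts on its arguments the way a structural rule acts on the premises of a sequent. Here is a minimal sketch in Python; the encoding and the names are mine, not from the SEP article or from Meyer.

```python
# Terms are atoms (strings) or applications (pairs). app builds a
# left-nested application; names and representation are my own.
def app(*ts):
    t = ts[0]
    for u in ts[1:]:
        t = (t, u)
    return t

def reduce_head(term):
    """One head-reduction step; returns (new term, name of structural rule)."""
    # Flatten the application spine: ((f, a), b) becomes head f, args [a, b].
    args, t = [], term
    while isinstance(t, tuple):
        args.append(t[1])
        t = t[0]
    args.reverse()
    head = t
    # K x y   ->  x      : weakening (an argument/premise is discarded)
    if head == 'K' and len(args) >= 2:
        x, rest = args[0], args[2:]
        return (app(x, *rest) if rest else x), 'weakening'
    # W x y   ->  x y y  : contraction (an argument/premise is duplicated)
    if head == 'W' and len(args) >= 2:
        x, y, rest = args[0], args[1], args[2:]
        return app(x, y, y, *rest), 'contraction'
    # C x y z ->  x z y  : exchange (arguments/premises are permuted)
    if head == 'C' and len(args) >= 3:
        x, y, z, rest = args[0], args[1], args[2], args[3:]
        return app(x, z, y, *rest), 'exchange'
    return term, None
```

Dropping K from the combinator basis then parallels dropping weakening from a sequent calculus, dropping W parallels dropping contraction, and so on; this is roughly the correspondence that links combinators to relevance and other substructural logics.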
Bimbo's website notes that she recently wrote a book with Dunn on generalized Galois logics, which looks like it extends the last two chapters of Dunn's algebraic logic text. I'd like to get my hands on that. Time to make a request to the library...
Posted by Shawn at 5:05 PM
Saturday, November 08, 2008
In chapter 3 of the Revision Theory of Truth (RTT), Gupta and Belnap argue against the fixed point theory of truth. They say that fixed points can't represent truth, generally, because languages with fixed point models are expressively incomplete. This means that there are truth-functions in a logical scheme, say a strong Kleene language, that cannot be expressed in the language on pain of eliminating fixed points in the Kripke construction. One example is the Łukasiewicz biconditional; another is the exclusion negation. The exclusion negation of A, ¬A, is false when A is true, and true otherwise. The Łukasiewicz biconditional, A ≡ B, is true when A and B agree in truth value, false when they differ classically, and n otherwise.
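Why exactly these two connectives cause trouble can be made concrete: the Kripke construction needs every connective to be monotone in the Kleene ordering, where n sits below both t and f, and both connectives fail that. Here is a quick check in Python that I wrote for illustration; the function names are mine.

```python
# Strong Kleene truth values with the Kleene (information) ordering:
# n is below both t and f; t and f are incomparable. A connective is
# monotone iff raising an input in this ordering never lowers the output,
# which is the condition Kripke's fixed point construction relies on.
VALS = ['t', 'f', 'n']

def below(a, b):
    """a carries no more information than b in the Kleene ordering."""
    return a == b or a == 'n'

def excl_neg(a):
    """Exclusion negation: false when a is true, true otherwise."""
    return 'f' if a == 't' else 't'

def kleene_neg(a):
    """Strong Kleene (choice) negation, for contrast: it is monotone."""
    return {'t': 'f', 'f': 't', 'n': 'n'}[a]

def luk_bicond(a, b):
    """Lukasiewicz biconditional: t if a and b agree, f if they differ
    classically, n otherwise."""
    if a == 'n' or b == 'n':
        return 't' if a == b else 'n'
    return 't' if a == b else 'f'

def monotone1(f):
    return all(below(f(a), f(b)) for a in VALS for b in VALS if below(a, b))

def monotone2(f):
    return all(below(f(a, c), f(b, d))
               for a in VALS for b in VALS for c in VALS for d in VALS
               if below(a, b) and below(c, d))

print(monotone1(kleene_neg))   # True
print(monotone1(excl_neg))     # False: n maps to t, but t maps to f
print(monotone2(luk_bicond))   # False: (n, n) maps to t, but (t, n) maps to n
```

The failures are exactly where the connectives convert a gap into a definite value: exclusion negation sends n to t while sending t to f, and the Łukasiewicz biconditional sends (n, n) to t while sending (t, n) to n.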
The shape of the argument seems to be the following. The languages we use appear to express all these constructions, and if they don't, we can surely add them or start using them. The descriptive problem of truth demands that our theories of truth work for languages that are as expressive as the ones we use. (Briefly, the descriptive problem is the problem of characterizing the behavior of truth as a concept we use, giving the patterns of reasoning that are acceptable and such.) The fixed point theories prevent this; therefore they cannot be adequate theories of truth.
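For concreteness, here is a miniature version of the Kripke construction the argument targets, in a Python encoding of my own devising: start with the truth predicate silent everywhere and repeatedly re-evaluate every sentence with strong Kleene rules. Because those connectives are monotone, the stages only gain information and reach a least fixed point.

```python
def kneg(a):
    """Strong Kleene negation over 't', 'f', 'n'."""
    return {'t': 'f', 'f': 't', 'n': 'n'}[a]

# Each sentence is evaluated against a hypothesis h about the truth
# predicate; h[s] is the current value of T applied to sentence s.
# The toy language and its four sentences are my own invention.
SENTENCES = {
    'snow':      lambda h: 't',              # a true atom of the base language
    'true_snow': lambda h: h['snow'],        # says: 'snow' is true
    'liar':      lambda h: kneg(h['liar']),  # says: I am not true
    'teller':    lambda h: h['teller'],      # says: I am true
}

def least_fixed_point():
    """Iterate the jump from the minimal hypothesis (T silent everywhere)."""
    h = {s: 'n' for s in SENTENCES}
    while True:
        h2 = {s: eval_s(h) for s, eval_s in SENTENCES.items()}
        if h2 == h:
            return h
        h = h2

print(least_fixed_point())
# {'snow': 't', 'true_snow': 't', 'liar': 'n', 'teller': 'n'}
```

At the least fixed point the liar and the truth-teller both come out n, while the grounded truth ascription settles to t. Replacing kneg with the exclusion negation would make the liar oscillate between t and f, so the iteration would never stabilize; that is the formal face of the expressive incompleteness claim.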
I’m not sure how forceful this argument is. I’m also not quite sure how damaging expressive incompleteness is. The expressive incompleteness at issue in the argument is truth-functional expressive incompleteness. There is lots of expressive incompleteness present in the languages under consideration. This is distinct from the language expressing its own semantics. The semantic concepts required for that need not be present since there will presumably be non-truth-functional notions used. It also isn’t part of the claim with respect to the languages we use. I will stop to discuss this point for a moment since I find it interesting.
The languages we use may or may not be able to express their own semantics. As Gupta says, rightly I think, one should be suspicious of anyone who claims that we must be able to express our semantic theories in the languages they are theories for. The primary reason is that we don't know what a semantic theory for the complete language would be. The extant semantic theories we have work for small, highly regimented fragments. Further, these theories are only defined on static languages, whereas the ones we use appear to be extensible. Additionally, these theories tend to be coupled to a syntactic theory that provides the structure of sentences on which the semantics recurses, and there is no such syntactic theory for the languages we use either. The shape of a semantic theory for a language in use might be very different from that of the smaller models currently studied; it might not even contain truth. The requirement that a language be able to express its own semantic theory seems to stem from an idealization based on current semantic theories that, if the above is right, is illicit. The question of expressive completeness is distinct from this question of semantics. The question of what it is to give a semantics for a language in that language is interesting, and it is raised in criticisms by both McGee and Martin. I hope to post on that soon.
One question that strikes me is how central this expressive power is to the descriptive problem. Expressive power is itself a notion that is somewhat obscure until one moves to a formal context in which one can tease apart distinctions. For example, it isn't at all apparent whether 'until' is expressible using the standard tense operators or constructions, even though all of these are, arguably, readily apparent in the languages we use. It isn't clear, then, that the notion of expressibility is even workable until we move from the less theoretical setting of language in use to a more theoretical one.
If we move to a more theoretical setting and discover that what we thought was vast expressive power has to be curtailed, then it isn't clear that our earlier intuition is what must be preserved. One could hold out for a theory of truth that preserves it, and Gupta clearly thinks this is an intuition to hold on to. Perhaps this is what a detailed statement of the descriptive problem demands.
Something else I wonder about this line of thought is how common expressive incompleteness of the truth-functional kind is among the most prominent logical systems. We have it in the classical case, and in limited circumstances we have it even with the addition of a T-predicate. In any case, we probably don't want to stop with logics that treat only truth-functions and T-predicates. We might want to add modal operators of some kind, and these are not truth-functional. What sorts of expressive problems are generated, or not, then? I'm not sure, although there is an excellent chapter in RTT comparing the expressive power of necessity as a predicate and as a sentence operator.