Wednesday, April 30, 2008

I declare victory over year two

Yesterday I finished up enough stuff to call my second year of philosophy grad school done. I gave the final presentation for my directed reading with Anil Gupta. It was on the connection between Dunn's gaggle theory and Belnap's display logic, as presented in some articles by Rajeev Gore and Greg Restall. It is neat stuff and I hope to put together a post on it. (If wishes were fishes...) Belnap came to my presentation, which included an ever so brief introduction to display logic. That was cool, although a bit nerve-wracking even though both members of my audience were incredibly nice.

In any case, finishing that presentation brought me up to quota for classes I need to have done by the fall, and my teaching obligations have been completely discharged. Woohoo! I still have some things to do over the summer: finish some outstanding model theory proofs and write a paper for my Carnap and Quine class. Neither of these is likely to be accomplished in the near future, as I leave on Friday for a three week vacation in Japan. I don't expect to be posting much while I'm overseas. Once I get back it looks like there will be some reading groups happening in Pittsburgh. The reading list is TBA.

I feel like I should have some thoughts on teaching, but I don't have anything put together. As I approach the time when I have to start thinking about putting together a prospectus (or, spinning in the void, as I affectionately call it), I find myself drawn more to philosophy of logic. This is good since there are a lot of people around with whom to talk about that. This is (possibly) bad since at the moment I know very little about the area. It just seems like a possible way of pulling together some of my interests. I'm looking forward to reading a lot this summer, since I didn't do much of that during the year. That and having a year off from teaching. (A complete non sequitur: I just discovered that my spellchecker doesn't recognize "woohoo" as a word. How odd.)

Saturday, April 26, 2008

Logical and structural

One of the key components of display logic is the distinction between structural connectives and logical connectives. It is an important distinction, but I don't think I have a good way of describing it. This makes me uncomfortable.

The structural connectives are connectives that take the place of structural rules. The display property, which isolates a given structural part according to its polarity, requires only structural rules, and those rules preserve the polarity of the part being isolated. The logical connectives have left- and right-introduction rules that involve no structure attached to the introduced formula. Different versions of a single connective, say →, have the same introduction rules modulo differences in structure.

What are the differences between the two sorts of connective? The structural connectives are usually overloaded while the logical connectives are not. That is to say, a structural connective like º will behave differently depending on the context: it acts conjunctively in an antecedent context but disjunctively in a succedent one. Logical connectives do not change their behavior depending on context. Restall claims (there wasn't a proof of this in the article I read, but it seems straightforward) that one can provide disambiguated structural connectives that are not overloaded and so do not change with context. This involves creating two versions of each structural connective, one for antecedent contexts and one for succedent ones. Overloading, then, can't be what distinguishes structural connectives from logical ones.
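To make the overloading concrete, here is a minimal sketch in Haskell of how a structure gets read off as a formula relative to its polarity. The datatypes and names are mine, not Belnap's, and this assumes a bare propositional language with º and ∗ as the only structural connectives.

```haskell
-- A minimal sketch: formulas and structures for a propositional
-- display calculus. All names here are hypothetical.
data Formula = Atom String
             | Formula :/\: Formula   -- conjunction
             | Formula :\/: Formula   -- disjunction
             | Neg Formula
  deriving Show

data Structure = Leaf Formula
               | Structure :.: Structure  -- the structural connective º
               | Star Structure           -- the structural connective ∗
  deriving Show

data Polarity = Antecedent | Succedent

flipP :: Polarity -> Polarity
flipP Antecedent = Succedent
flipP Succedent  = Antecedent

-- Reading a structure as a formula: º is read conjunctively in
-- antecedent position and disjunctively in succedent position; ∗ is
-- read as negation and flips the polarity of its substructure.
interp :: Polarity -> Structure -> Formula
interp _ (Leaf f) = f
interp p (Star s) = Neg (interp (flipP p) s)
interp Antecedent (s :.: t) = interp Antecedent s :/\: interp Antecedent t
interp Succedent  (s :.: t) = interp Succedent  s :\/: interp Succedent  t

-- For example, p º q∗ read in antecedent position:
-- ghci> interp Antecedent (Leaf (Atom "p") :.: Star (Leaf (Atom "q")))
-- Atom "p" :/\: Neg (Atom "q")
```

A sequent X ⇒ Y then gets read by interpreting X with antecedent polarity and Y with succedent polarity, which is exactly the overloading just described: the same º shows up as ∧ on the left and as ∨ on the right.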

Sometimes the logical connectives mimic structure, to use Restall's phrase. An example of this is conjunction mimicking º in antecedent context. This notion of mimicking can be made more precise, as Restall does in his "Display Logic and Gaggle Theory." The basic idea is that a logical connective f mimics a structural connective s iff two sorts of rules are admissible ('A' denotes formulae, 'X' structures): first, the rule taking X ⇒ s(A1,...,An) to X ⇒ f(A1,...,An); second, the rule taking proofs of Xi ⇒ Ai (or Ai ⇒ Xi, depending on the polarities of the connectives involved) for 1≤i≤n to f(A1,...,An) ⇒ s(A1,...,An). Some logical connectives don't mimic structure. For example, [] may not, depending on the conditions placed on it.
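In rule form, the succedent-polarity case of that definition looks like this (my transcription of the prose above, not a quotation of Restall's formulation):

```latex
% f mimics s (succedent-polarity case): both rules below are admissible.
\[
\frac{X \Rightarrow s(A_1,\ldots,A_n)}{X \Rightarrow f(A_1,\ldots,A_n)}
\qquad\qquad
\frac{X_1 \Rightarrow A_1 \quad \cdots \quad X_n \Rightarrow A_n}
     {f(A_1,\ldots,A_n) \Rightarrow s(A_1,\ldots,A_n)}
\]
```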

Formulae contain only atomic formulae and logical connectives, whereas structures are built out of formulae and structural connectives. Cut can be applied only to formulae, not structures.

This gives a bit of a feel for the difference between the two, but it doesn't seem like a succinct enough explanation. They have different inductive definitions, one building things up with structural connectives and one building things up with logical connectives. But, given that some of the logical connectives mimic the structural connectives, this doesn't seem like the best route for explaining their difference. Maybe what I am after is a more philosophical explanation. Are the structural connectives logical in a way that makes them of the same kind as the more standardly logical connectives? Are º and classical conjunction members of the same philosophically important kind?

Thursday, April 24, 2008

Excellent advice

Some excellent advice I got from my model theory teacher: "If you've proven the inconsistency of mathematics, you should go back and check your proof."

Wednesday, April 23, 2008

Display logic

I'm working my way through some papers by Rajeev Gore and Greg Restall connecting some algebraic logic (gaggles) to some proof theory (display logic). A post on gaggles is forthcoming; I need to figure out something to say first. That won't stop me with display logic though. I'll shoot for a little bit of exposition. A little rough exposition.

Display logic is a sequent proof system for various logics. Its innovation is the introduction of structural connectives to augment the logical connectives. Two of the main ones in Belnap's original presentation were º and ∗. The former is an associative, binary structural connective that acts sort of like conjunction, and the latter is a unary structural connective that acts sort of like negation. Together with these, one distinguishes two sorts of context, antecedent and consequent: antecedent context is to the left of the turnstile and consequent context is to the right. Being within the scope of an odd number of ∗s reverses the polarity, to use a leading term. The º becomes a conjunction in antecedent position and a disjunction in consequent position. The ∗ becomes a negation in either. (I say "becomes" but this is spelled out in the introduction rules for the connectives.) There are some basic rules relating structures that allow one to prove some structures to be display equivalent. For example, X ⇒ Y and X∗∗ ⇒ Y are equivalent.
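For concreteness, here is the sort of list of display equivalences at issue, in the classical case. This is my rendering based on the semantic reading just given, not Belnap's official list, and it assumes º and ∗ behave classically:

```latex
% Display equivalences, classical case: º comma-like, ∗ negation-like.
\begin{align*}
X \circ Y \Rightarrow Z &\quad\text{iff}\quad X \Rightarrow Y^{\ast} \circ Z \\
X \Rightarrow Y \circ Z &\quad\text{iff}\quad X \circ Y^{\ast} \Rightarrow Z \\
X \Rightarrow Y &\quad\text{iff}\quad Y^{\ast} \Rightarrow X^{\ast}
  \quad\text{iff}\quad X^{\ast\ast} \Rightarrow Y
\end{align*}
```

The last line gives the equivalence of X ⇒ Y and X∗∗ ⇒ Y mentioned above.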

Display logic gets its name from (or lends its name to, I forget) a distinctive property it has: the display property. The display property says that when one has a structure Z as a part of a structure X, e.g. X(Z) ⇒ Y (I am using ⇒ for the turnstile since I can't find a turnstile in html), it is possible, using display equivalences alone, to isolate that structure in antecedent or consequent position, depending on its initial polarity, e.g. to obtain from X(Z) ⇒ Y something like X(Y) ⇒ Z. [Edit: This isn't quite right. It should say: if Z is a succedent structure, from X(Z) ⇒ Y, obtain W ⇒ Z, where W is a structure resulting solely from structural manipulations of X, Y, and Z. If Z is an antecedent structure, the analogous statement is: from X(Z) ⇒ Y, obtain Z ⇒ U, where U is a structure resulting solely from structural manipulations of X, Y, and Z.] This allows us to display the structure Z all by its lonesome. This is important because then we can give introduction rules for logical connectives that do not involve structure in or around the formula containing the logical connective. For example, this gives us a way to formulate a single pair of left- and right-introduction rules for → for a range of logical systems that differ only in structural assumptions. To put things another, slightly more tendentious way, making the structural assumptions explicit in the form of operators allows us to get a better grip on the introduction rules.
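Here is a worked instance of displaying, using the equivalences listed above together with a permutation postulate for º (assumed available in the system), for the antecedent case where Z sits inside X º Z:

```latex
% Displaying Z from X º Z ⇒ Y: each line is display equivalent to the last.
\begin{align*}
X \circ Z &\Rightarrow Y \\
Z \circ X &\Rightarrow Y && \text{(permutation for } \circ\text{)} \\
Z &\Rightarrow X^{\ast} \circ Y && \text{(first equivalence above)}
\end{align*}
```

Here the U of the bracketed edit is X∗ º Y, built solely from structural manipulations of X and Y.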

By some straightforward inductions, we can get the following: for any formula M, M ⇒ M is provable; and for any structures X, Y and formula M, if one has X ⇒ M and M ⇒ Y, then one can infer X ⇒ Y. The latter is the cut rule. Indeed, part of the appeal of display logic is that it provides a general, easy (?) way of showing cut elimination. Belnap takes these results to show that the left- and right-introduction rules are in harmony, i.e. that they confer the same meaning. I don't think I can rehearse that argument in detail here. How cut is used seems fairly clear, but I'm not sure as to the use of the identity theorem.
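In rule form, the two results just mentioned (my typesetting):

```latex
% Identity (provable for arbitrary formulas M) and cut.
\[
\frac{}{M \Rightarrow M}
\qquad\qquad
\frac{X \Rightarrow M \quad M \Rightarrow Y}{X \Rightarrow Y}
\]
```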

Of course, once one has introduced some structural connectives, there is no reason to stop. For example, instead of having an empty antecedent or consequent, which behaves differently depending on polarity, one could introduce a structural constant to mark it. Or, instead of side conditions, one could introduce a structural connective, I, to mark modal contexts. Doing this, the right-introduction rule for [] is: from XºI ⇒ M, infer X ⇒ []M. I like this because it turns out that when the cut formula is []M, there must be additional structure on X, namely the structure marking the modal context. This seems to me to capture the idea of side conditions on derivations involving modality, which is what 'I' was introduced to do.
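Typeset, that rule reads as follows (this is just the rule as stated above; the corresponding left rule isn't given here):

```latex
% Right-introduction for [] using the modal-marking structure I.
\[
\frac{X \circ I \Rightarrow M}{X \Rightarrow \Box M}
\]
```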

Tuesday, April 22, 2008

I like words

I've been reading Paul Halmos's autobiography, which he called his automathography, as a break from grading. It does not go into much detail about his personal life, focusing instead on his life as a professional mathematician. In some ways, it is like Quine's autobiography, which didn't focus on his personal life either; rather, it focused on his travels and his life as a professional philosopher. In fact, both books had decent chunks about various conferences and such. Halmos's book was less interesting on this front since I wasn't familiar with a lot of the people he talked about, though I'm not sure it would be much more interesting if one knew them. Halmos's book is boring in roughly the same way that Quine's was, and interesting in similar but different ways. (Or maybe Quine's book just seems better than it was when viewed through the rosy lens of memory.)

Halmos is a good writer. He had some tips on writing which seem applicable to philosophy (and more so to logic). His writing even made mundane things like editing a journal and writing a textbook moderately entertaining. It was interesting to see his evaluation of academia, though I'm not sure how much of it will be news. The most interesting part of this aspect of the book was about writing recommendation letters; he included a few examples with some commentary. I wonder how philosophy recommendations compare. I've never seen any philosophy recommendation letters.

Another part of the book that was interesting for me was Halmos talking about logic. Halmos was one of the innovators in algebraic logic; he did a lot of work on Boolean algebras and polyadic algebras. It was interesting because he expressed a sentiment that I've heard echoed by several friends who studied math. Halmos thought that logic was not particularly interesting and largely consisted of fussy bookkeeping, e.g. keeping a stock of distinct variables. He even thought that logic didn't provide a helpful toolbox of techniques. Yet, when he was able to see that classical propositional logic corresponded to Boolean algebra and then came up with polyadic algebras, logic became more interesting to him. I think his phrasing was that he could make out the algebraic content of various theorems. The sentiment he expressed has puzzled me in the past, but his way of putting it made it a little clearer. It is still somewhat puzzling, though this might be partly because of my philosophical upbringing (the only word for it is the German "Bildung"), which took place in a department with several logicians and with math classes that tended to be logic friendly. In any case, I'd almost recommend studying some algebraic logic just for the excuse of reading some of Halmos's work.

The opening of his book really hooked me and provided decent motivation for finishing it: "I've always liked words more than numbers." I can dig that. Words are neat.

Saturday, April 19, 2008

Visualizing accessibility

In Kripke frames, the accessibility relation can be represented nicely in a diagram. For S4, the accessibility relations form nice trees. For S5 the accessibility relation is still easily diagrammed. This is one of the nice things about working with Kripke frames.

Recently I've been studying Routley-Meyer frames for relevance logic a bit. These are frames with a three-place accessibility relation. (Actually, I am coming at these via Dunn's generalization of them, gaggles.) I sort of have an idea of what the relation is, but this understanding comes primarily from thinking through the Routley-Meyer semantic clause for evaluating the relevance logic conditional. I wasn't able to come up with a nice way of diagramming these frames, and Dunn's book didn't have any either. Are there any nice diagrammatic representations of ternary accessibility relations?
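For reference, the clause I mean is the standard Routley-Meyer one, with R the ternary accessibility relation on points:

```latex
% Evaluating the relevant conditional at a point x.
\[
x \Vdash A \to B
\quad\text{iff}\quad
\text{for all } y, z:\ \big( Rxyz \text{ and } y \Vdash A \big)
\text{ implies } z \Vdash B
\]
```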

Thursday, April 10, 2008

SEP on definitions

I just saw that Anil Gupta's SEP entry on definitions is now online. (HT: Richard Zach)

Wednesday, April 09, 2008

Complementation

In chapter 3 of Dunn's book, there is a detailed presentation of the lattice-theoretic interpretation of various logical operations. There is quite a bit in the section on complementation that I've been thinking about. One such idea is that of split complementations. One way of motivating this is to stipulate an incompatibility relation I and impose the following equivalence, where '-' is a complementation operation:
x≤-a iff xIa.
It is natural to impose symmetry on I, but it isn't required. Imposing symmetry gives us a single complementation based on I. If I is not symmetric, then we can define another complementation '∼':
x≤∼a iff aIx.
This gives us the following: x ≤ -y iff y ≤ ∼x. This makes the pair (-,∼) a Galois connection. (I'm hoping to come up with a post about why that is neat.)
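The verification is a short chain through the two defining equivalences, instantiated at the same pair of elements:

```latex
% Why (-,∼) is a Galois connection: both sides unpack to the same
% instance of I.
\[
x \le -y
\quad\text{iff}\quad
x\,I\,y
\quad\text{iff}\quad
y \le {\sim}x
\]
```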

Proof-theoretically, if you drop structural assumptions until you get distinct arrows, ← and →, then you can also get split negations, with a falsity constant, by stipulating: x→⊥ is -x; ⊥←x is ∼x. In his book on substructural logics, Restall suggests understanding these in terms of sequences of actions. The former says that an action of type x followed by an action of type x→⊥ gives an action of type ⊥ while the latter says that an action of type ⊥←x followed by one of type x leads to one of type ⊥. (⊥ is used in a narrower way in Restall's book. I'm using it like he used f.)
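One way to see how the two stipulations come apart, on the assumption that → and ← are the right and left residuals of a (possibly non-commutative) composition · of action types, as in the substructural setting: the two negations record incompatibility with x on different sides of the composition.

```latex
% Residuation (assumed): x · y ≤ z  iff  y ≤ x → z  iff  x ≤ z ← y.
% Taking z = ⊥ and unpacking the two defined negations:
\begin{align*}
y \le -x = x \to \bot \quad&\text{iff}\quad x \cdot y \le \bot
  && \text{an } x\text{-action then a } y\text{-action is of type } \bot\\
y \le {\sim}x = \bot \leftarrow x \quad&\text{iff}\quad y \cdot x \le \bot
  && \text{a } y\text{-action then an } x\text{-action is of type } \bot
\end{align*}
```

On this reading the induced relation xIy, i.e. x·y ≤ ⊥, is order-sensitive, which looks like the asymmetry worried about below.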

This interpretation in terms of actions makes sense of the split negations, but it doesn't seem to mesh with the incompatibility idea. Granted, Restall doesn't motivate the idea of a split negation with I. The action interpretation results in an asymmetry, but it isn't the sort that one finds in incompatibility. Dunn doesn't offer an account of what the non-symmetric incompatibility relation comes to. Logically it makes perfectly fine sense: we've stipulated a binary relation and defined two operations in terms of it. The question, then, is why we should call it an incompatibility relation. So far I haven't been able to come up with anything. If the relation is symmetric, it makes sense to take it to be incompatibility. If it is not symmetric, it seems like we've changed topics, but it isn't clear to what.

In Making It Explicit, there is a footnote in which Brandom talks about one of his students trying out the idea of non-symmetric incompatibility relations. The footnote, if I remember correctly, says that no interesting applications had been found for them. I don't know if they were connected to split negation operations. There isn't anything in the footnote about what motivates calling the non-symmetric relation an incompatibility relation. Brandom has developed some of the incompatibility stuff in the appendix to the fifth of his Locke Lectures. I don't know if there is any discussion of non-symmetric incompatibility relations there. I'm not hopeful, as it is mostly a technical document that isn't likely to address this worry.

Sunday, April 06, 2008

Quine and ontology

In his "Things and their place in theories" Quine presents his views on ontology. In particular, he presents three sorts of ontologies one could adopt. The first is the physical object ontology which, unsurprisingly, includes physical objects amongst its things. The second is the space-time ontology which replaces physical objects with the regions of space-time points that the objects occupied. This is broadened then to include empty regions as well. The third sort of ontology replaces space-time points with quadruples of real numbers, and so gets by with just set theory. Sets are all you need to make sense of science on this view.

The first two options seem reasonably attractive. What strikes me as odd about the third option, Quine's favored one, is that it doesn't seem to mesh with what he says about perception. The essay opens by saying, "Our talk of external things, is just a conceptual apparatus that helps us to foresee and control the triggering of our sensory receptors in the light of previous triggering of our sensory receptors." The sensory stimulations constitute observations and perceptions. I'm going to set aside how Quine gets from the former to the latter. It makes sense to say that we observe physical objects and, possibly, that we observe space-time points or regions of them. We don't observe sets or quadruples of reals. Thus, we shouldn't adopt this third option for ontology, since it doesn't help us explain our sensory stimuli and observations.

My uneasiness can probably be alleviated with a more thoroughgoing Quinean response. In the above, I said that "we don't observe sets," understanding "we" in the physical object sense and "observe" in the roughly standard sense. Replacing objects with quadruples of reals (or sets of them) will require likewise adjusting how we understand "observe"; proxy functions could be inserted appropriately. It will have to be some set-theoretic relation containing the observers and the observed quadruples. The observers, us, will likewise have to be understood in terms of sets of quadruples of reals, with certain quadruples appearing in the sets that represent our surface irritations. This more consistently Quinean approach answers my earlier worry, but there is a lingering one, although it seems a bit lame. The worry is that I am not a set of quadruples of reals, although I could be represented as one. If we are worried about what there is, why should we concern ourselves with representations of things, rather than the things themselves? It happens that in this case the representations are full-fledged objects in their own right. I'm not sure how moved I am by this last consideration. In writing it up, I started to think I was missing something in Quine's view.

Saturday, April 05, 2008

Carnap and foundations

Things are rather busy around here between the end of term, prospectives visiting, and me trying to finish up my model theory work from last semester. I'm having some trouble posting because of this. Some ideas are bouncing around, but I haven't yet posted them. Here's a short bit on one of them.

Several philosophers have leveled objections against parts of Carnap's logical syntax project, among them Goedel, Kleene, Friedman, and Beth. Their objections focus on Carnap's views on foundational issues in the philosophy of math. This comes out especially in the Goedel and Beth objections (the latter of which I want to discuss in more detail later). The Friedman (primarily in his "Tolerance and analyticity in Carnap's philosophy of mathematics") and Kleene objections find tension between the principle of tolerance and foundational issues. All of these objections share a common move, and it is this move that some of the responses, such as Ricketts's and Goldfarb's, want to dispute. The move is to saddle Carnap's logical syntax project with more interest and involvement in foundational issues than Carnap had.

It sounds weird, in a way, to deny that Carnap had foundational aims. He talks about logicism a lot and seems to adopt that thesis. He wrote a pamphlet entitled "Foundations of Logic and Mathematics." Nonetheless, there is something to the response of denying these foundational aims to Carnap. Carnap's logicism is different from Frege's and Russell's: Carnap doesn't try to reduce math to logic in Frege's sense. I'm not sure why exactly he is a logicist. He wants the mathematical part of any language to be valid or contra-valid, but this is not a reduction to logic. The pamphlet mentioned does not really address foundational issues in the way that Frege or Brouwer did. The parts that talk about foundations are surprisingly short, and they seem more interested in dismissing the questions than resolving them.

I wasn't looking for Carnap's discussion of foundational issues in Logical Syntax when I read it. However, I was looking for that discussion in "Foundations." From the little bit that was there, it seemed consistent to deny that Carnap wanted to engage in the foundational debates of his time; he rather wanted to sweep them aside with the invocation of his principle of tolerance. I haven't gone back to LSL to check this, but I'm expecting to see a similar pattern there. Are there any places in LSL that are particularly hard to read as anything but Carnap engaging in foundational debates?

It might help to clarify what I mean by "foundational debates." An example would be the disagreement between the intuitionists and the classical mathematicians. The intuitionists, roughly, wanted to reject certain forms of mathematical reasoning and the results that followed from them. The classical mathematicians wanted to keep all of classical mathematics, since they regarded it as coherent and correct, and since classical math provided more tools, possibly essential ones, for scientific inquiry. The intuitionists provided arguments that classical math was incoherent (or bad or...) and the classical mathematicians provided responses. By denying that Carnap was engaging in foundational debates, one is denying that Carnap wanted to provide arguments against intuitionism and for classical math, that he wanted to adjudicate the dispute and settle who had the better arguments. Instead, he invoked the principle of tolerance to sidestep the issues entirely. This is not to say that he didn't have sympathies: he preferred classical math, at least judging from LSL. But this was not the product of a settled foundational debate.