Wednesday, December 31, 2008

A few reflections on the fall term

I'm slightly late with this, but I'll go ahead with it anyway. A few posts ago I said that I'd come up with some reflections on the term. This is more for my benefit than for the benefit of others, but someone might find it interesting.

This term I took three classes: proof theory with Belnap, philosophy of math with Wilson, and truth with Gupta. They meshed well. I had hoped that I would generate a prospectus topic out of them. I have some ideas, but nothing as concrete as a prospectus topic. I also met regularly with Ken Manders to talk about prospectus ideas. This helped me come up with some promising stuff that I'll hopefully work out some over the next few weeks.

I'll start with the proof theory class. This was class number two on the topic. This time around it was with Belnap and it focused on substructural logics, particularly relevance logic. I was happy with that since I've been lately bitten by the relevance logic bug. We did some of the expected stuff and spent a while going through Gentzen's proof of cut elimination in detail. I think I got a pretty good sense of the proof. What I was rather pleased with were the forays into combinators and display logic. We did some stuff with combinators in the other proof theory class, but it was relatively unclear to me, at the time, why. This time the connection between combinators and structural rules was made clear. Display logic is, I think, quite neat. I ended up writing a short paper on some philosophical aspects of display logic.

Next up is the philosophy of math class with Wilson. The focus of the class was on Frege's philosophy of math, particularly as it related to some of the mathematical developments of his time. I'm not sure how much it changed my view of Frege. I think it did make it clear that interpreting Frege is less straightforward than I previously thought. I'm more convinced of the importance of seeing Frege's work in logic in the context of the worries about number systems and foundations going on in the late 19th century. I suspect that the long-term upshot of this class will be what it got me to read. I was turned on to a few books on the history of math, e.g. Gray's and Grattan-Guinness's. There is a lot of material in those books on the development of concepts that is fairly digestible. I'm thinking about writing a paper on concept development in the late 19th century in something like the vein of Wandering Significance. We'll have to see how that goes. Through some of these readings I got a bit stuck on how to characterize the algebraic tradition in logic. There's something distinctive there, especially when compared to Frege's views, and I'd like to have a better sense of what is going on. It seems like it would help with interpreting the Tractatus.

Last is Gupta's truth and paradox seminar. We covered hierarchy theories of truth briefly, moved on to fixed point theories, then spent a long while on the revision theory as presented in the Revision Theory of Truth. This class was excellent. I think I got a decent handle on the basic issues in theories of truth. There was a lot of formal work and that was balanced against non-formal philosophical stuff fairly well. I'm doubtful I will write a dissertation on truth theory, but one upshot is that it has given me some good perspective on the notions of content and expressive power. These are treated well in RTT, and the discussion there has helped me generate some ideas that I'm trying to develop. The other upshot is that I got to study circular definitions. I have an interest in definitions anyway, and circular definitions, I've confirmed, are awesome. I wrote a short paper for this class on some things in the proof theory of circular definitions. I think it turned out well.

As I mentioned before, I've finished my coursework as of this term. Next term I'll be focusing primarily on working out a prospectus. I'll also be attending a few seminars: category theory with Steve Awodey, philosophy of math with Manders, and the later Wittgenstein with Ricketts. I'm not sure to what degree, if at all, these will figure in my dissertation, but they seem too good to pass up.

Blogging has been somewhat light this term, since I got a bit overburdened with regular meetings with professors, finishing some papers, and taking two logic classes. It was fairly productive though. Posting should increase some next term. (There's been a general slowdown in the philosophy blogosphere, at least the parts I read, this semester that I'm a little bummed to have contributed to, but maybe things will perk up going forward, or maybe not.)

Next term should start well. The classes look good. I'm going to get to spend the next few weeks finishing up some reading and trying to formulate a few thoughts better. In late January, Greg Restall will be giving a talk at Pitt, which should be fun.

Also, this was post number 400 for my blog. Between all the announcements and links, that means I've gotten a good 200 or so vaguely philosophical posts online.

Tuesday, December 30, 2008

Definitions and content

Reading the Revision Theory of Truth has given me an idea that I'm trying to work out. This post is a sketch of a rough version of it. The idea is that circular definitions motivate the need for a sharper conception of content on an inferentialist picture, possibly other pictures of content too. It might have a connection to harmony as well, although that thread sort of drops out. The conclusion is somewhat programmatic, owing to the holidays hitting.

In Articulating Reasons, there is a short discussion of harmony. It begins with a discussion of Dummett on slurs. Brandom says, "If Dummett is suggesting that what is wrong with the concept Boche is that its addition represents a nonconservative extension of the rest of the language, he is mistaken. Its non-conservativeness just shows that it has a substantive content, in that it implicitly involves a material inference that is not already implicit in the contents of other concepts being employed." (p. 71)
Being a non-conservative addition to a language means that the addition has a substantive content. It licenses inferences to conclusions in which it does not appear that were not previously licensed. I want to point out something that doesn't seem to fit happily within this picture.

In the Revision Theory of Truth, Gupta and Belnap present a theory of circular definitions and several semantical systems for them. I will focus on the weakest system, S0. According to S0, the addition of circular definitions to a language constitutes a conservative extension of the language. That being said, it seems like the introduction of circular definitions brings with it a substantive content, given by the revision sequences and the set of definitions. From the quote, it seems that Brandom is saying that non-conservativeness is sufficient for substantive content, not necessary. We would have a nice counterexample otherwise. If we look at stronger systems, the Sn systems (n>0), then it turns out that the same set of definitions may not yield a conservative extension of the language.

It might be misleading to cast things in terms of the semantical systems since Brandom casts things in terms of inferential role. Gupta and Belnap offer proof systems for the S0 and Sn systems. These proof systems are sound and complete with respect to the appropriate semantical system. In C0, the proof system corresponding to S0, the addition of a set of definitions to a language yields a conservative extension, as would be expected.

The fact that circular definitions are conservative in S0 doesn't upset Brandom's claim above. It doesn't seem like we want to say that all circular definitions lack substantive content. Unlike non-circular definitions that are conservative over the base language and eliminable, they are not mere abbreviations. The addition of circular definitions has semantical consequences in the form of new validities. Circular definitions point out the need for necessary conditions on the notion of substantive content, since one would expect that there are circular definitions that aren't substantive, e.g. one with a definiens that is tautological. Alternatively, a sharper notion of content, and so substantive content, would help clarify what is going on with circular definitions of different stripes.
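To make the "different stripes" point concrete, here is a minimal sketch of finite revision stages for two toy definitions over a one-element domain. The setup, names, and the particular definitions are mine, chosen for illustration rather than taken from RTT, and limit stages, where much of the real work happens, are left out.

    # A toy sketch of revision for circular definitions, in the spirit of RTT.
    # Everything here (one-element domain, the definitions, the names) is
    # illustrative rather than Gupta and Belnap's own notation.

    def revise(definiens, hypothesis, domain):
        """One revision step: the new hypothesis collects exactly the objects
        that satisfy the definiens under the old hypothesis."""
        return {x for x in domain if definiens(x, hypothesis)}

    domain = {"a"}

    # A liar-like definition, Gx =df not Gx: no object ever stabilizes.
    liar_like = lambda x, h: x not in h

    # A definition with a tautologous definiens, Gx =df (Gx or not Gx):
    # every starting hypothesis is revised to the whole domain and stays there.
    trivial = lambda x, h: (x in h) or (x not in h)

    for name, definiens in [("Gx =df not Gx", liar_like),
                            ("Gx =df Gx or not Gx", trivial)]:
        h, stages = set(), []
        for _ in range(6):              # a few finite revision stages
            stages.append(sorted(h))
            h = revise(definiens, h, domain)
        print(name, stages)

    # Output:
    #   Gx =df not Gx [[], ['a'], [], ['a'], [], ['a']]
    #   Gx =df Gx or not Gx [[], ['a'], ['a'], ['a'], ['a'], ['a']]

The first behaves like the liar and never settles; the second is about as close to a mere abbreviation as a circular definition can get. A sharper notion of content would need to track that sort of difference.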

Saturday, December 27, 2008

Two Quinean things

In my browsing of Amazon, I came across something kind of exciting. There are two new collections of Quine's work coming, edited by Dagfinn Follesdal and Douglas Quine. They are Confessions of a Confirmed Extensionalist and Other Essays and Quine in Dialogue. The former appears to be split between previously uncollected essays, previously unpublished essays, and more recent essays. The latter appears to consist of a lot of lighter pieces, reviews, and interviews. Amazon doesn't seem to have the tables of contents available yet, but they are available at the publisher's page, here and here. Both look promising for those that are interested in Quine. I'm curious to read Quine's review of Lakatos in the latter volume. It could be wildly disappointing, but it would be nice to see Quine's reaction to a philosophy of math that is so at odds with his own. [Edit: In the comments, Douglas Quine points out that more detailed information for the new volumes, as well as information on other centennial events, is up on the W.V. Quine website.]

The other Quinean thing is a question. Is there anywhere in Quine's writings where he discusses the role of statistics and probability in modern science? It seemed like there could be something there that could be used as the beginning of an objection to Quine's fairly tidy picture of scientific inquiry. (This thought is sort of half-baked at this point.) Over the holidays I couldn't think of anywhere Quine talked about how it fit into his epistemological views. It seemed odd that Quine didn't ever discuss it, given the importance of statistics in science, so I'm fairly sure I'm forgetting or overlooking something. There might be something in From Stimulus to Science or Pursuit of Truth, but I won't have access to those for a few days yet. [Edit: In the comments Greg points out that Sober presented a sketch of a criticism along the lines above in his paper "Quine's Two Dogmas," available for download on his papers page.]

Wednesday, December 17, 2008

Question about negations

Does anyone know of any proof systems in which some but not all contents have negations? I'm looking for examples for a developing project.

Semantic self-sufficiency

I'm trying to work out some thoughts on the topic of semantic self-sufficiency. My jumping off point for this is the exchange between McGee, Martin and Gupta on the Revision Theory of Truth. My post was too long, even with part of it incomplete, so I'm going to post part of it, mostly expository, today. The rest I hope to finish up tomorrow. I'm also fairly unread in the literature on this topic. I know Tarski was doubtful about self-sufficiency and Fitch thought Tarski was overly doubtful. Are there any particular contemporary discussions of these issues that come highly recommended?

In their criticisms of the revision theory, both McGee and Martin say that a fault of the revision theory is that it is not semantically self-sufficient. The metalanguage for its characterization of truth must be stronger than the object language. Martin puts the point as follows.

[Gupta and Belnap] dismiss the goal of trying to understand truth for a language entirely from within the language. Although they point out some problems with the very notion of a universal language... The problem that the semantic paradoxes pose is not primarily the problem of understanding the notion of truth in expressively rich languages, it is the problem of understanding our notion of truth. And we have no language beyond our own in which to discuss this problem and in which to formulate our answers.

McGee expresses a similar sentiment.

Gupta, in his reply to Martin and McGee, presents the objection in the following way. (I follow Gupta pretty closely here.) (1) A semantic description of English must be possible. (2) This description must be formulable in English itself, i.e. English must be semantically self-sufficient. (3) The revision semantics for a language can only be constructed in a richer metalanguage. (4) Revision semantics is therefore not suitable for English. (5) Therefore, revision semantics doesn't capture the notion of truth in English. The problem Gupta diagnoses is that this takes the aim of the project of investigating truth to be one thing, while he sees it as another. McGee and Martin see the goal as the construction of a language L that can express its own semantic theory. Gupta sees the goal as giving a semantics of the predicate "true in L" of L, generally. He calls the former the self-sufficiency project and the latter the truth project. Gupta points out that the truth project, the goal of the revision theory, is independent of the self-sufficiency project, which is not independent of the truth project. (Gupta also gave a partial, positive answer to the self-sufficiency project, for languages that lack certain sorts of self-reference.) Gupta expresses some doubt about the prospects of full success in the self-sufficiency project. In what follows I'll present Gupta's arguments.

To set the stage for the doubts, we need to idealize English, or any language, as frozen at some stage in its development. Otherwise there are possibly extraneous concerns that arise about self-sufficiency at different times and with respect to different times. If we take English to be fixed at some stage of its development, there is a problem about spelling out what conceptual resources it contains and so what is at the disposal of the semantic theory. Part of the reason for thinking that English is semantically self-sufficient is that it has a great deal of expressive power and flexibility. Expressions can be cooked up to denote any expression or thing one might want. Gupta puts it this way: there are expressions whose interpretations can be varied indefinitely.

Gupta poses a dilemma. Either the semantic self-sufficiency of English is due to this flexibility or it is not. If it is, then there is no motivation for semantic self-sufficiency. The self-sufficiency project supposes fixed conceptual resources, so the flexibility of English cannot motivate that project after all. If not, we can suppose that English has fixed conceptual resources. Given that, what reason is there to think that English is self-sufficient? There is no empirical confirmation of this. There doesn't seem to be much by way of a priori reason for thinking it either. In either case, there doesn't seem to be any motivation for taking English to be self-sufficient. He gives one motivation, although the discussion and criticism of that could stand independently of the dilemma posed here.

Gupta notes that the thing most often cited in favor of thinking that English is semantically self-sufficient is the "comprehensibility of English by English speakers." This has an ambiguity. In one sense, 'comprehensibility' is the ability to use and understand the language. In another sense, it is the ability to give a systematic semantic theory for the language. The claim is then that English speakers can give a systematic semantic theory for their language. In the former sense, the claim is trivial. In the latter sense, there is not really any reason to believe it.

This point seems to me to be allied to thoughts about following rules and norms. One can follow rules quite easily. It is often quite hard to make explicit the rules and norms one is following and how they systematically fit together. If giving a semantic theory involves something like this, then it wouldn't be unexpected that some difficulties would arise. I don't think Gupta has this in mind though.

Gupta raises a second worry. Even if the previous one can be overcome, there is the problem of giving a semantic theory for a stage of English within that stage, because there is a gap between the abilities of English speakers and the resources available at a given stage of English. The speakers can, and do, enrich their vocabulary with mathematical and logical resources, and the ability to do this might be intimately bound up with the ability to give a semantics for English. Appealing to the abilities of speakers of a language to motivate semantic self-sufficiency then seems to create a problem. One could idealize the speakers as similarly frozen at a given stage, along with their language, so that no developmental capacities enter into the picture. To claim semantic self-sufficiency here is to simply disregard the previous objection. It is also unclear why one would think that in such a scenario semantic self-sufficiency would obtain.

The point of Gupta's criticism is not to refute all hope of semantic self-sufficiency. It is rather to cast doubt on it and its motivations. If he is right, then it shouldn't be taken as a basic desideratum of a theory of truth, or any semantic theory. Then those parts of McGee's and Martin's objections lose their force. I think it also indicates some of the places where claims about semantic self-sufficiency need to be sharpened, which I'll try to address in a post in the near future.

Monday, December 15, 2008

Pointing out a review

I wanted to avoid having another post that was primarily a link, but I seem to be having some difficulty getting a post together lately. In any case, there is a review of Gillian Russell's Truth in Virtue of Meaning up at NDPR. The review seems to be fairly detailed, so I'll let it stand on its own.

Wednesday, December 10, 2008

I declare victory over coursework

Today I submitted the last paper I needed to finish in order to fulfill my course requirements at Pitt. Now I get to concentrate on finishing up some side projects and working on my nascent prospectus. Yay!

An end of term reflection will likely follow in the next week or so.

Saturday, December 06, 2008

Representation theorems and completeness

This term I've spent some time studying nonmonotonic logics. This led me to look at David Makinson's work. Makinson has done a lot of work in this area and he has a nice selection of articles available on his website. One unexpected find on his page was a paper called "Completeness Theorems, Representation Theorems: What’s the Difference?" A while back I had posted a question about representation theorems. In the comments, Greg Restall answered in detail. Makinson's paper elaborates this some. He says that representation theorems are a generalization of completeness theorems, although I don't remember why they were billed as such. There are several papers on nonmonotonic logic available there. "Bridges between classical and nonmonotonic logic" is a short paper demystifying some of the main ideas behind non-monotonic logic. The paper “How to go nonmonotonic” is a handbook article that goes into more detail and develops the nonmonotonic ideas more. Makinson has a new book on nonmonotonic logic, but it looked like most of the content, minus exercises, is already available in the handbook article online.
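To give a flavor of the main ideas, here is a small sketch of one of the simplest routes to nonmonotonic consequence, roughly the default-assumptions idea if I'm remembering the "Bridges" paper right: a conclusion follows from some premises just in case it follows classically from the premises together with every maximal premise-consistent subset of a fixed set of background assumptions. The encoding, the names, and the bird example are mine.

    from itertools import combinations, product

    # Tiny classical propositional logic: formulas are atoms (strings) or tuples
    # ('not', A), ('and', A, B), ('or', A, B), ('implies', A, B).
    ATOMS = ["bird", "flies"]

    def holds(f, v):
        if isinstance(f, str):
            return v[f]
        op = f[0]
        if op == "not":
            return not holds(f[1], v)
        if op == "and":
            return holds(f[1], v) and holds(f[2], v)
        if op == "or":
            return holds(f[1], v) or holds(f[2], v)
        return (not holds(f[1], v)) or holds(f[2], v)   # 'implies'

    def valuations():
        for bits in product([False, True], repeat=len(ATOMS)):
            yield dict(zip(ATOMS, bits))

    def entails(premises, conclusion):
        # Classical consequence by brute-force truth tables.
        return all(holds(conclusion, v) for v in valuations()
                   if all(holds(p, v) for p in premises))

    def consistent(fmls):
        return any(all(holds(f, v) for f in fmls) for v in valuations())

    def default_entails(background, premises, conclusion):
        # The conclusion must follow from the premises plus every maximal
        # premise-consistent subset of the background assumptions.
        ok = [set(s) for n in range(len(background) + 1)
              for s in combinations(background, n)
              if consistent(list(s) + premises)]
        maximal = [s for s in ok if not any(s < t for t in ok)]
        return all(entails(list(s) + premises, conclusion) for s in maximal)

    K = [("implies", "bird", "flies")]                 # background: birds fly
    print(default_entails(K, ["bird"], "flies"))                    # True
    print(default_entails(K, ["bird", ("not", "flies")], "flies"))  # False

Adding a premise defeats a conclusion that was previously drawn, which is exactly the failure of monotonicity that classical consequence never exhibits.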

Thursday, December 04, 2008

Brandomia

There is now a multimedia section up on Brandom's website. It includes the videos of the Locke lectures with commentary as given in Prague as well as the Woodbridge lectures as given at Pitt. I think one of the videos of the latter features a mildly hard to follow muddle of a question by me. If you are into that stuff, it is well worth checking out.

Friday, November 28, 2008

Just a reminder [deadline extended]

The deadline for the Pitt-CMU conference [edit: has been extended to 12/15. Please submit!]

Tuesday, November 25, 2008

Oh, Dover

I just found out that Yiannis Moschovakis's Elementary Induction on Abstract Structures was released as a cheap Dover paperback over the summer. It was previously only available in the horrendously expensive yellow hardback series by... North-Holland, according to Amazon. The secondary literature on the revision theory of truth has recently nudged me into looking at this book, and it is nice to know that it is available at a grad-student-friendly price. Philosophical content to follow soon.

Monday, November 24, 2008

The birth of model theory

I just finished reading Badesa's Birth of Model Theory. It places Löwenheim's proof of his major result in its historical setting and defends what is, according to the author, a new interpretation of it. This book was interesting on a few levels. First, it placed Löwenheim in the algebraic tradition of logic. Part of what this involved was spending a chapter elaborating the logical and methodological views of major figures in that tradition, Boole, Schröder, and Peirce. Badesa says that this tradition in logic hasn't received much attention from philosophers and historians. There is a book, From Peirce to Skolem, that investigates it more and that I want to read. I don't have much to say about the views of each of those logicians, but it does seem like there is something distinctive about the algebraic tradition in logic. I don't have a pithy way of putting it though, which kind of bugs me. Looking at Dunn's book on the technical details of the topic confirms it. From Badesa, it seems that none of the early algebraic logicians saw a distinction between syntax and semantics, i.e. between a formal language and its interpretation, nor much of a need for one. Not seeing the distinction was apparently the norm and it was really with Löwenheim's proof that the distinction came to the forefront in logic. A large part of the book is attempting to make Löwenheim's proof clearer by trying to separate the syntactic and semantic elements of the proof.

The second interesting thing is how much better modern notation is than what Löwenheim and his contemporaries were using. I'm biased of course, but they wrote a_{x,y} for what we'd write A(x,y). That isn't terrible, but for various reasons sometimes the subscripts on the 'a' would have superscripts and such. That quickly becomes horrendous.

The third interesting thing is it made clear how murky some of the key ideas of modern logic were in the early part of the 20th century. Richard Zach gave a talk at CMU recently about how work on the decision problem cleared up (or possibly helped isolate, I'm not sure where the discussion ended up on that) several key semantic concepts. Löwenheim apparently focused on the first-order fragment of logic as important. As mentioned, his work made important the distinction between syntax and semantics. Badesa made some further claims about how Löwenheim gave the first proof that involved explicit recursion, or some such. I was a little less clear on that, although it seems rather important. Seeing Gödel's remarks, quoted near the end of the book in footnotes, on the importance of Skolem's work following Löwenheim's was especially interesting. Badesa's conclusion was that one of Gödel's big contributions to logic was bringing extreme clarity to the notions involved in the completeness proof of his dissertation.

I'm not sure the book as a whole is worth reading though. I hadn't read Löwenheim's original paper or any of the commentaries on it, which a lot of the book was directed against. The first two chapters were really interesting and there are sections of the later chapters that are good in isolation, mainly where Badesa is commenting on sundry interesting features of the proof or his reconstruction. These are usually set off in separate numbered sections. I expect the book is much more engaging if you are familiar with the commentaries on Löwenheim's paper or are working in the history of logic. That said, there are parts of it that are quite neat. Jeremy Avigad has a review on his website that sums things up pretty well also.

Sunday, November 16, 2008

Another plea for a reprint

While I am coming to the pleas for reprints late, it occurred to me that it would be very nice to have a Dover reprint of the two volumes of Entailment. Of course, I wouldn't complain if Princeton UP issued a cheap paperback version. They are out of print and are individually prohibitively expensive. There also are not enough copies of volume 2 floating around. It can be hard to get one's hands on volume 2 around here, which is unfortunate since I've lately needed to look at it.

[Edit: Looking at the comments on the thread I linked to, it also strikes me that it'd be nice to have a volume on the theme of logical inferentialism. It would have reprints of Gentzen's main papers, some of Prawitz's stuff, Prior's tonk article, Belnap's reply and his display logic paper, an appropriate smattering of stuff from Dummett and Martin-Löf, possibly some of the technical work done by Schroeder-Heister, Kremer's philosophical papers on the topic, Hacking's piece, and some of Read's and Restall's papers. I'm sure there are others that could go into it, although I think what I've listed would already push it into the two volume range. Dare to dream...]

Friday, November 14, 2008

Combinatory logic

There's what appears to be a nice article on combinatory logic up at the Stanford Encyclopedia, authored by Katalin Bimbo. The article briefly mentions Meyer's work on combinators, and it talks about the connection between combinators and non-classical logics. However, it doesn't seem to make explicit the connection between combinators and structural rules in a sequent calculus, which Meyer calls the key to the universe.

Bimbo's website notes that she recently wrote a book with Dunn on generalized Galois logics, which looks like it extends the last two chapters of Dunn's algebraic logic text. I'd like to get my hands on that. Time to make a request to the library...

Saturday, November 08, 2008

A note on expressive problems

In chapter 3 of the Revision Theory of Truth (RTT), Gupta and Belnap argue against the fixed point theory of truth. They say that fixed points can’t represent truth, generally, because languages with fixed point models are expressively incomplete. This means that there are truth-functions in a logical scheme, say a strong Kleene language, that cannot be expressed in the language on pain of eliminating fixed points in the Kripke construction. An example of this is the Lukasiewicz biconditional. Another example is the exclusion negation. The exclusion negation of A, ¬A, is false when A is true, and true otherwise. The Lukasiewicz biconditional, A≡B, is true when A and B agree on truth value, false when they differ classically, and n otherwise.


The shape of this argument seems to be the following. The languages we use appear to express all these constructions. If they don’t, we can surely add them or start using them. The descriptive problem of truth demands that our theories of truth work in languages that are as expressive as the ones we use. (Briefly, the descriptive problem is the problem of characterizing the behavior of truth as a concept we use, giving patterns of reasoning that are acceptable and such.) The fixed point theories prevent this, so they cannot be adequate theories of truth.
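For what it's worth, the standard way to see why these two connectives clash with the Kripke construction, which the argument above leaves implicit, is that the construction needs every connective to be monotone with respect to the information ordering, on which n sits below both classical values. A quick check (the encoding and names are mine):

    from itertools import product

    # Three truth values; 'n' is the undefined/neither value of the strong Kleene scheme.
    VALUES = ["t", "f", "n"]

    def leq_info(x, y):
        """Information ordering: n is below t and f; t and f are incomparable."""
        return x == y or x == "n"

    # Strong Kleene (choice) negation: monotone.
    def kleene_neg(a):
        return {"t": "f", "f": "t", "n": "n"}[a]

    # Exclusion negation: false when A is true, and true otherwise.
    def excl_neg(a):
        return "f" if a == "t" else "t"

    # Lukasiewicz biconditional: true when A and B agree, false when they
    # differ classically, and n otherwise.
    def luk_bicond(a, b):
        if a == b:
            return "t"
        if "n" not in (a, b):
            return "f"
        return "n"

    def monotone1(f):
        return all(leq_info(f(a), f(b))
                   for a, b in product(VALUES, repeat=2) if leq_info(a, b))

    def monotone2(f):
        return all(leq_info(f(a, b), f(c, d))
                   for a, b, c, d in product(VALUES, repeat=4)
                   if leq_info(a, c) and leq_info(b, d))

    print(monotone1(kleene_neg))   # True  -- safe for the fixed point construction
    print(monotone1(excl_neg))     # False -- n maps to t, but t maps to f
    print(monotone2(luk_bicond))   # False -- (n, n) maps to t, but (t, f) maps to f

Since the jump operator inherits its monotonicity from the connectives, adding either of these removes the guarantee that fixed points exist.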

I’m not sure how forceful this argument is. I’m also not quite sure how damaging expressive incompleteness is. The expressive incompleteness at issue in the argument is truth-functional expressive incompleteness. There is lots of expressive incompleteness present in the languages under consideration. This is distinct from the language expressing its own semantics. The semantic concepts required for that need not be present since there will presumably be non-truth-functional notions used. It also isn’t part of the claim with respect to the languages we use. I will stop to discuss this point for a moment since I find it interesting.

The languages we use may or may not be able to express their own semantics. As Gupta says, rightly I think, one should be suspicious of anyone who claims that we must be able to express our semantic theories in the languages they are theories for. The primary reason for this is that we don’t know what a semantic theory for the complete language would be. The extant semantic theories we have work for small, highly regimented fragments. Further, these theories are only defined on static languages, whereas the ones we use appear to be extensible. Additionally, these theories tend to be coupled to a syntactic theory that provides the structure of sentences on which the semantics recurs. There is no such syntactic theory for the languages we use either. The shape of a semantic theory for the language we use might be very different from that of the smaller models currently studied. It might not even contain truth. The requirement that a language be able to express its own semantic theory seems to stem from an idealization based on current semantic theories that, if the above is right, is illicit. The question of expressive completeness is distinct from this question of semantics. The question of what it is to give a semantics for a language in that language is interesting, and is raised in criticisms by both McGee and Martin. I hope to post on that soon.

One question that strikes me is how central this expressive power is to the descriptive problem. Expressive power itself is a notion that is somewhat obscure until one moves to a formal context in which one can tease apart distinctions. For example, it isn’t at all apparent that ‘until’ isn’t expressible using the standard tense operators or constructions, even though all these are, arguably, readily apparent in the languages we use. It isn’t clear, then, that the notion of expressibility is even workable until we move to a more theoretical setting from the less theoretical setting of language in use.

If we move to a more theoretical setting and discover that what we thought was vast expressive power has to be curtailed, then it isn’t clear that our earlier intuition is what must be preserved. One could hold out for a theory of truth that preserved it. Gupta clearly thinks this is one to hold on to. Perhaps this is what a detailed statement of the descriptive problem demands.

Something else that I wonder about this line of thought is how common expressive incompleteness, of the truth-functional kind, is among the most prominent logical systems. We have it in the classical case. In limited circumstances, we have it even with the addition of the T-predicate. In any case, we probably don’t want to stop with just logics that treat only truth-functions and T-predicates. We might want to add modal operators of some kind, and these are not truth-functional. What sort of expressive problems are generated, or not, then? I'm not sure, although there is an excellent chapter in RTT on comparing the expressive power of necessity as a predicate and as a sentence operator.

Tuesday, November 04, 2008

PSA

I'm going to be in Pittsburgh this weekend. If any readers or fellow bloggers are going to be in town for the PSA and want to meet up drop me a line. There are at least a few people I'm hoping to meet.

Thursday, October 30, 2008

A note on the revision theory

In the Revision Theory of Truth, Gupta says (p. 125) that a circular definition does not give an extension for a concept. It gives a rule of revision that yields better candidate hypotheses when given a hypothesis. More and more revisions via this rule are supposed to yield better and better hypotheses for the extension of the concept. This sounds like there is, in some way, some monotonicity given by the revision rule. What this amounts to is unclear though.

For a contrast case, consider Kripke's fixed point theory of truth. It builds a fixed point interpretation of truth by placing more sentences in the extension and anti-extension of truth at each stage. This process is monotonic in an obvious way. The extension and anti-extension only get bigger. If we look at the hypotheses generated by the revision rules, they do not solely expand. They can shrink. They also do not solely shrink. Revision rules are non-monotonic functions. They are eventually well-behaved, but that doesn't mean monotonicity. One idea would be that the set of elements that are stable under revision monotonically increases. This isn't the case either. Elements can be stable in initial segments of the revision sequence and then become unstable once a limit stage has been passed. This isn't the case for all sorts of revision sequences, but the claim in RTT seemed to be for all revision sequences. Eventually hypotheses settle down and some become periodic, but it is hard to say that periodicity indicates that the revisions result in better hypotheses.
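Here is a toy example of the grow-and-shrink behavior, using a two-element domain and a definition of my own devising (finite stages only):

    # A toy revision rule over the domain {a, b} for the circular definition
    #   Gx =df (x = a and not Gb) or (x = b and Ga)
    # The example and encoding are mine, just to illustrate the point above.

    def revise(h):
        new = set()
        if "b" not in h:   # a satisfies the definiens iff b is not in the hypothesis
            new.add("a")
        if "a" in h:       # b satisfies the definiens iff a is in the hypothesis
            new.add("b")
        return new

    h = {"a"}
    for _ in range(5):
        print(sorted(h))
        h = revise(h)
    # Prints the cycle {a} -> {a, b} -> {b} -> {} -> {a}: the hypothesis grows and
    # then shrinks, and neither a nor b is stable. The rule is also not monotone:
    # {} is a subset of {b}, yet revise({}) = {a} is not a subset of revise({b}) = {}.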

The claim that a rule of revision gives better candidate extensions for a concept is used primarily for motivating the idea of circular definitions. It doesn't seem to figure in the subsequent development. The theory of circular definitions is nice enough that it can stand without that motivation. Nothing important hinges on the claim that revision rules yield better definitions, so abandoning it doesn't seem like a problem. I'd like to make sense of it though.

Friday, October 24, 2008

Question about the theories of models and proofs

I seem to have gone an unfortunately long time without a post. I'm not quite sure how that happened. I'll try to get back on the wagon.

I have a question that maybe some of my readers might be able to answer. What are some good sources of either criticisms of proof theoretic semantics/proof theory from the model theoretic standpoint or criticisms of model theoretic semantics/model theory from the proof theoretic standpoint? I'm sure there is a lot out there for the former that references the incompleteness theorems. I'm not sure where to look for the latter. From a post at Nothing of Consequence I've found a paper by Anna Szabolcsi that spells out a few things. Beyond that, I'm not terribly sure where to look.

Friday, October 10, 2008

Cut and truth

Michael Kremer does something interesting in his "Kripke and the Logic of Truth." He presents a series of consequence relations, ⊧(V,K), where V is a valuation scheme and K is a class of models. He provides a cut-free sequent axiomatization of the consequence relation ⊧(V,K), where V is the strong Kleene scheme and K is the class of all fixed points. He proves the soundness and completeness of the axiomatization with respect to the class of all fixed points. After this, he proves the admissibility of cut. I wanted to note a couple of things about this proof.


As Kremer reminds us, standard proofs of cut elimination (or admissibility) proceed by a double induction. The outer inductive hypothesis is on the complexity of the cut formula, and the inner inductive hypothesis is on how long in the proof the cut formula has been around. Kremer notes that this will not work for his logic, since the axioms for T, the truth predicate, reduce complexity. (I will follow the convention that 'A' is the quote name of the sentence A.) T'A' is atomic no matter how complex A is. ∃xTx is more complex than T'∃xTx' despite the latter being an instance of the former. Woe for the standard method of proving cut elimination.
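For what it's worth, here is the obstacle in schematic form. The rules below are just the general shape of truth rules that let one pass between a sentence and the ascription of truth to its quote name; they are not Kremer's exact formulation.

    \[
      \frac{\Gamma \vdash \Delta, A}{\Gamma \vdash \Delta, T\ulcorner A\urcorner}
      \qquad\qquad
      \frac{A, \Gamma \vdash \Delta}{T\ulcorner A\urcorner, \Gamma \vdash \Delta}
    \]
    % The standard argument assigns each cut a degree (the complexity of the cut
    % formula) and trades every cut for cuts of strictly lower degree. Here the
    % measure misbehaves, since an ascription of truth is atomic however complex
    % the sentence it is about:
    \[
      \deg(T\ulcorner A\urcorner) = 0 \ \text{for every } A, \qquad
      \deg(T\ulcorner \exists x\, Tx\urcorner) = 0 < \deg(\exists x\, Tx).
    \]
    % Pushing a cut on the truth ascription past these rules leaves a cut on A
    % itself, whose degree can be arbitrarily large, so the induction on degree
    % has nothing to grab onto.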

Kremer's method of proving cut elimination is to use the completeness proof. He notes that cut is a sound rule, so by completeness it is admissible. Given the complexity of the completeness proof, I'm not sure this saves on work, per se.

Now that the background is in place, I can make my comments. First, he proves the admissibility of cut, rather than mix. The standard method proves the admissibility of mix, which is like cut on multiple formulas, since it would otherwise run into problems with the contraction rule. Of course, the technique Kremer used is equally applicable to mix, but the main reason we care about mix is because showing it admissible is sufficient for the admissibility of cut. Going the semantic route lets us ignore the structural rules, at least contraction.

Next, it seems significant that Kremer used the soundness and completeness results to prove cut admissible. He describes this as "a detour through semantics." He doesn't show that a proof theory alone will be unable to prove the result, just that the standard method won't do it. Is this an indication that proof theory cannot adequately deal with semantic concepts like truth? This makes it sound like something one might have expected from Gödel's incompleteness results. There are some differences though. Gödel's results are for systems much stronger than what Kremer is working with. Also, one doesn't get cut elimination for the sequent calculus formulation of arithmetic.

Lastly, the method of proving cut elimination seems somewhat at odds with the philosophical conclusions he wants to draw from his results. He cites his results in support of the view that the inferential role of logical vocabulary gives its meaning. This is because he uses the admissibility of cut to show that the truth predicate for a three-valued language is conservative over the base language and is eliminable. These are the standard criteria for definitions, so the rules can be seen as defining truth. Usually proponents of a view like this stick solely to the proof theory for support. I'm not sure what to make of the fact that the Kripke constructions used in the soundness and completeness (and so cut elimination) results do not seem to fit into this picture neatly. That being said, it isn't entirely clear that appealing to the model theory as part of the groundwork for the argument that the inference rules define truth does cause problems. I don't have any argument that it does. It seems out of step with the general framework. I think Greg Restall has a paper on second-order logic's proof theory and model theory that might be helpful...

Thursday, October 09, 2008

Pitt-CMU conference update

I'm pleased to announce that Chris Pincock, currently at the Center for Philosophy of Science, has agreed to be the faculty speaker at the conference. Hartry Field is the keynote speaker. For more info see the website.

Monday, September 29, 2008

Call for papers: 2009 Pitt-CMU Philosophy Graduate Student Conference

I'm pleased to announce the 2009 Pitt-CMU Philosophy Graduate Student Conference.

Keynote speaker: Hartry Field
Theme: Truth, Meaning and Evidence
Faculty speakers: TBA
When: March 28, 2009
Where: CMU

Call for papers: The deadline for submissions is December 1, 2008. More information can be found here.

Also, read the reviews of last year's conference by Ole here and by Errol here.

Two minutes

Here are some more thoughts on Wittgenstein on the foundations of math. For those interested, be sure to check out the prose interpretation of Wittgenstein on math and games at Logic Matters. In part 6 of the Remarks on the Foundations of Math, Wittgenstein presents several thought experiments to probe a cluster of notions: calculation, proof, and rule. One of these is two-minute England in section 34.

Two-minute England is described in the following way. God creates a country in the middle of the wilderness that is physically just like England. The caveat is that it only exists for two minutes. Everything in this new country looks like stuff in England. In particular, one sees a person doing stuff that exactly mimics what English mathematicians do when they do math. This person is the one to focus on. Wittgenstein asks, "Ought we to say that this two-minute-man is calculating? Could we for example not imagine a past and a continuation of these two minutes, which would make us call the process something quite different?"

The questions, I take it, indicate doubt that we must take the two-minute-man as calculating. We can, but it is not compulsory. This is because there is no reason, given what we've observed, to think that he must be calculating. In this case there is no fact of the matter. We might be tempted to attribute calculation to the two-minute-man because we fill out his story with some events leading up to and following this that lead us to think that he is calculating. These events don't happen, since he only exists for two minutes. There is no wider context for this person that settles whether they were calculating, or scribbling, or regurgitating symbols seen elsewhere.

The point of this thought experiment is to present some evidence that calculation is not identifiable with any bit of mere behavior. Connecting this with other sections of part 6, the behavior only becomes calculation when it is connected up with some appropriate purposes or situated in a normative context in which it is appropriate to talk about correct or incorrect calculation. Wittgenstein's focus is to argue that various mathematical notions are like this.

This passage is preceded by one in which Wittgenstein says, "In order to describe the phenomenon of language, one must describe a practice, not something that happens once, no matter of what kind." The two-minute England thought experiment is intended to illustrate this point. I'm not sure that there is anything in the two-minute-man's life that would let us embed it in a practice of some sort.

Connecting the thought experiment up in a nice way with what precedes it requires fleshing out the notion of a practice, which I can't do. There are scattered remarks on that idea in the Remarks, which I haven't begun to put together. Despite this, it seems to me that two-minute England fits together better with what comes earlier and later in this part of the Remarks, namely that calculation isn't just a matter of behavior. This needs to be connected with the wider concern of what it means to follow a rule in order to make it a bit clearer, I think. Incidentally, I think that the rule-following discussion in the Remarks is more accessible than the discussion in the Philosophical Investigations.

Friday, September 26, 2008

Historical note

I recently heard some speculation on the origin of the term "Hilbert system" for the axiomatic systems that are used to inflict pain on logic students, particularly at the start of proof theory classes. A long time ago, at least when Gentzen wrote, they were called logistic systems. The first publication in which they were called "Hilbert systems" seems to have been Entailment, vol. 1. Does anyone know if there are earlier uses? I'd be quite happy if that turned out to be the origin of the term.

Friday, September 19, 2008

Inflammatory quote in lieu of a post

While doing some reading on relevance logic, I came across a review of Entailment vol. 2 by Gerard Allwein. It ends by noting that the volume doesn't include recent work on substructural logics and pointing to an article by Avron that gives details of the relationship between linear logic and relevance logic. Allwein follows this up by saying, "It is noted by this reviewer that linear logic is intellectually anaemic compared to relevance logic in that relevance logic comes with a coherent and encompassing philosophy whereas linear logic has no such pretensions." This was written in 1994. I have no idea to what degree this is still true since I have no firm ideas about linear logic. I'm just discovering the "coherent and encompassing philosophy" that relevance logic comes with. I keep meaning to follow up on this stuff about linear logic at some point...

Since I'm talking about linear logic, I may as well link a big bibliography on linear logic.

Tuesday, September 16, 2008

Catchy theorem

I came across the following in Entailment vol. 1 while trying to get unstuck on some stuff. It is, quite possibly, the snappiest way of putting a theorem of logic I've heard. A possible contender is Halmos's way of putting the crowning glory of modern logic. Here is the theorem:
Manifest repugnancies entail every truth function to which they are analytically relevant.
Anderson and Belnap, being logicians, define 'manifest repugnancy' and 'analytically relevant'. The former is a conjunction of formulas such that for all atomic formulas p occurring in it, ~p also occurs in it. The latter is defined as follows: A is analytically relevant to B if all propositional variables of B occur in A. This is within the context of the logic E, I think.
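Put a bit more formally, reading the definitions as above (my gloss, with "provable" meaning provable in E per the hedge just made):

    \[
      \text{If } A \text{ is a manifest repugnancy and every propositional variable of the
      truth function } B \text{ occurs in } A, \text{ then } A \to B \text{ is provable.}
    \]
    \[
      (p \wedge \sim p \wedge q \wedge \sim q) \to (p \vee \sim q) \quad \text{is an instance;}
    \]
    \[
      (p \wedge \sim p) \to q \quad \text{is not, since } q \text{ does not occur in the antecedent.}
    \]

The second formula is just ex falso quodlibet in entailment form, which is exactly the sort of thing E is built to reject, so the variable-occurrence condition is doing real work.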

Sunday, September 14, 2008

Yet more dynamics of reason

In lecture three of Dynamics of Reason, Friedman addresses problems of relativism and rationality. This is needed since he leans so heavily on the picture of science coming out of Kuhn which on its own tends to invite charges of bad relativism.

Friedman thinks that Kuhn's responses to the charge of bad relativism are inadequate. I'm not going to go through them though. Friedman's response begins by noting that Kuhn fails to distinguish between instrumental rationality and communicative rationality, a distinction suggested by Habermas. Instrumental rationality is the capacity for means-ends reasoning given a goal to bring about. Communicative rationality "refers to our capacity to engage in argumentative deliberation or reasoning with one another aimed at bringing about an agreement or consensus opinion." (p. 54) Friedman says that instrumental rationality is more subjective while communicative rationality is intersubjective. A steady scientific paradigm underwrites, to use Friedman's phrase, communicative rationality. Revolutionary science then seems to threaten communicative rationality. A similar sort of worry arises for Carnap and his multiple linguistic frameworks.

At this point it is unclear to me how Kuhn's failure to distinguish these two kinds of rationality hurt his responses.

Friedman thinks that the scientific enterprise aims at consensus between paradigms as well as within a paradigm. It does this in three ways. One is exhibiting the old paradigm as a special or limiting case of the new paradigm. For example, Newtonian gravity becomes a special case of relativistic gravitational theory. Friedman sketches a similar exhibition of Aristotelian mechanics as a special case of classical mechanics. This is highly anachronistic but Friedman thinks the proper response emerges when we take a more historical view. He says that the concepts and principles of the new paradigm emerge from the old in natural ways. He thinks it aids nothing to view scientists from different paradigms as speakers of wildly different and incommensurable languages. He says, "In this sense, they are better viewed as different evolutionary stages of a single language rather than as entirely separate and disconnected languages." (p. 60) (Friedman does say this transition is "natural" in a few places, e.g. p. 63. He doesn't say what he means by natural, which seems to form the crux of his claim here. I'll come back to this.)

Friedman goes on to say that another way in which inter-paradigm consensus is aimed at is by successive paradigms aiming at greater generality and adequacy. One example: in the move from Aristotelian mechanics to classical mechanics, a Euclidean view of space is retained while a hierarchically and teleologically ordered spherical universe is discarded. He unfortunately doesn't address the issues raised by Kuhn on this point, namely that it is hard to say what is meant by generality and adequacy here. Successive paradigms take up many new questions to be sure, but they also discard many old questions and solutions. They may, in fact, change what counts as adequate. This seems like an odd omission at this point in the dialectic.

Friedman reviews these points, that successive paradigms aim at incorporating old paradigms as special cases and that new concepts and principles should evolve out of old. He says that "this process of continuous conceptual transformation should be motivated and sustained by an appropriate new philosophical meta-framework, which, in particular, interacts productively with both older philosophical meta-frameworks and new developments taking place in the sciences themselves. This new philosophical meta-framework thereby helps to define what we mean ... by a natural, reasonable, or responsible conceptual transformation." (p. 66) This bit of discussion is preceded by a quick sketch of the regulative principles of current science approximating those of "a final, ideal community of inquiry," which apparently has some affinities to Cassirer's view. (I skipped it because it didn't shed light on things for me.) Friedman gives some handwavy descriptions about how philosophical meta-frameworks determine what is a natural transformation, but it is postponed to one of the essays in the second half of the book. Up until this point, philosophical meta-frameworks didn't enter into the discussion, so it seems a bit unmotivated to bring them in here. There were some things left unresolved, but claiming that a philosophical meta-framework resolves them is unsatisfying. The follow up essay might resolve this.

One of the examples of a philosophical meta-framework in action that Friedman gives is the debate between Helmholtz and Poincare on the foundations of geometry against the backdrop of Kantian views. While both Helmholtz and Poincare were philosophical, it doesn't reflect too well, I would think, on their philosopher contemporaries that they didn't produce more (or at least aren't cited as producing more) of the philosophical framework. This particular example has a hint of claiming work by those who are more squarely in the camp of mathematical physics for the philosophical camp. I'd rather avoid going into discussions of disciplinary boundaries, but this seems like a weak justification for (or a weak example of a success of, I'm not sure which it is supposed to be) scientific philosophy. Maybe this indicates something that Friedman thinks, namely that philosophers should be more engaged with, perhaps primarily engaged with, doing hard work in the hard sciences.

At the end of the lecture, I am still wondering how the distinction between communicative and instrumental rationality was crucial. The distinction seemed to fade to the background pretty quickly and do little work. Friedman's points about inter-paradigm consensus were similar to ones that Kuhn discusses in Structure, so I'm a bit unclear on how Friedman's were adequate whereas Kuhn's were not. The follow up essays might clear things up, but I don't plan on reading them any time soon.

Friday, September 12, 2008

More dynamics of reason

The second lecture of Dynamics of Reason was much more substantive than the first. There were two main parts, a positive one and a negative one.

The positive part was putting forward what I take to be the most important part of Friedman's lectures, his conception of relativized a priori. I'm not completely clear on it, but it seems like it is supposed to be a combination of Kantian and Carnapian perspectives. Kantian because it wants to answer questions about the very possibility of something, in particular how science is possible. Carnapian because it does this through Carnap's general setup of linguistic frameworks. The frameworks come with two different sorts of rules, L-rules and P-rules. L-rules are the mathematical and logical rules for forming and transforming sentences. P-rules are the empirical laws and scientific generalizations.

Following the first lecture, Friedman takes the L-rules to be constitutive of paradigms, in Kuhn's sense. They are the rules of the game, so to speak. This diverges from Kuhn in at least one respect. The L-rules are supposed to be listable whereas Kuhn doesn't think all aspects of a paradigm can be made explicit. Some can, but things like familiarity with equipment and some other sorts of know-how cannot be.

The P-rules are the substance of normal science. Questions about the P-rules are treated as internal questions. Questions about adopting or revising the L-rules are treated as external questions.

The relativization of the a priori comes in when Friedman notes that some P-rules, some scientific hypotheses, cannot even be formulated, much less be used to describe the world, until some bit of math is available. The math, in the corresponding L-rules, is presupposed by the P-rules. They are required for the possibility of formulating and applying the P-rules themselves, to put it in a Kantian way. One example of this is the calculus and Newton's laws of mechanics. Friedman notes that Newton's laws are, in their mathematical presentation, formulated in terms of the calculus. Additionally they only make sense against the background of a fixed inertial frame, which concept is given in an L-rule.

The claim seems to be that the L-rules of a framework provide the possibility of knowledge, a priori, within the area of that framework. The relativization is supposed to get around Kant's big problem, which was that he tethered his source of a priori knowledge to something that would provide Newton's laws and use Euclidean geometry. Since it is possible to switch between frameworks, and thus switch between L-rules, one can have sources of a priori knowledge, but relativized to a given framework. That said, I'm not sure I'm comfortable with this idea of relativized a priori knowledge. Friedman says that it provides a foundation for knowledge, but it seems awfully mutable for something foundational. Presumably we do not change frameworks that often, since we are supposed to see them as akin to paradigms in science.

Friedman's picture is very much like Carnap's. I'm not sure why he is saddling it with the Kantian stuff. He presents Carnap as being quite the neo-Kantian, although I don't think I get what that aspect adds to Carnap's view as presented. (I'm also somewhat troubled by the seemingly uncritical wholesale adoption of Kuhn's view of science. The presentation of Friedman's view of the a priori here seems to depend on Kuhn's picture of scientific development being right.)

The presentation of the relativized a priori was the positive part. The negative part consists of an argument against Quine's epistemological holism. The argument's main thrust is that holism can't account for the sort of dependence on, or presupposition of, different parts of math by different empirical laws. Friedman notes that Quine says that empirical content and evidence accrue to mathematical and logical statements as well as occurrence statements and empirical laws. Thus, the elements of the web of belief are all on an evidential (epistemic?) par, so we can treat the web as a big conjunction of statements. There are no asymmetric dependencies in a conjunction, so Quine's view can't account for the impossibility of, say, formulating Newtonian mechanics while rejecting (or merely lacking?) the presupposed math. In Friedman's words, "Newton's mechanics and gravitational physics are not happily viewed as symmetrically functioning elements of a larger conjunction: the former is rather a necessary part of the language or conceptual framework within which alone the latter makes empirical sense." Friedman actually gives this argument twice, once applied to Newtonian mechanics and once applied to relativity theory. The arguments are virtually the same and are on consecutive pages. (Maybe it worked better in lecture form.) Quine's view cannot account for the history of science, the parallel development of mathematical framework with empirical claims.

It seems like the Quinean could agree with Friedman. The view sketched flounders, but it is not Quine's view either. It seems to me something of a strawman. Despite discussing some of Quine's views on revising theory, Friedman concludes that Quine treats the elements of the web of belief, to continue using the metaphor, as parts of one big conjunction. This doesn't seem like Quine's view at all. It is important to Quine that certain things, such as math and logic, are less open to revision and rejection because they provide such utility and unificatory power. They are used in deriving consequences of theory and observation. It seems consistent with the Quinean view, I think because it was the Quinean view, that there be an asymmetry between the math that is needed in the framing of an empirical law and the empirical law. From what is presented, Friedman's objection doesn't cut against Quine.

In a way, this is too bad. The setup is perfect for really testing how Quine's epistemological holism could deal with some hard cases, i.e. detailed case studies from the history of science. Unfortunately, these are used to dismiss the weird conjunctive picture of Quine's holism, which is obviously bad. It would be informative to tell the Quinean story, in more detail, for what is going on with the dependence of Newton's mechanics and gravitation on the calculus. I'm not going to do that now though.

In all, the positive part of the lecture was more convincing, although it was still a bit lacking. There is a follow-up essay entitled "The Relativized A Priori" that would probably clear things up. I'm not sure if I'm going to read it though. It depends on how lecture three goes.

Thursday, September 11, 2008

Dynamics of reason

I started reading Michael Friedman's Dynamics of Reason. The book is broken into two parts, the original lectures that form the basis for the book and things that came out of discussion of the lectures. I suppose I will reserve judgment on whether to read the latter bits till I've gotten through the main lectures. Read more

The first lecture contained some brief history about how philosophers and scientists came to be two largely distinct groups. The philosophers of Descartes' time would, apparently, have thought it strange to be made to choose between camps. Following Newton, the two groups started to diverge, although many were still interested in developments on both sides. Friedman goes on to tell a story on which the conceptual problems coming out of the sciences form the basis of philosophical debate. The first paradigm for this is Kant and his attempt to explain the possibility of Newtonian physics. Kant does this, roughly, by incorporating the basic concepts of Newtonian physics into the form of our intuition, the conceptual scheme with which we attempt to make sense of the world of appearance. This idea is continued later with Schlick trying to set up a similar foundation for relativity theory, except, rather than using an unchangeable, rigid idea like the Kantian form of intuition, he uses Poincaré's conventionalism. This is prima facie less rigid.

The story continues forward to Kuhn's characterization of the development of science in terms of scientific revolutions. Friedman makes an interesting observation here. He claims that one of the reasons Carnap was so enthusiastic about Kuhn's work, which was published in the Encyclopedia Carnap edited, was that he thought Kuhn's normal/revolutionary science distinction lined up with the internal/external question distinction of his framework. Normal science is about developing science within a paradigm, a (more or less) fixed set of rules and ideas by which everyone operates. The framework itself is not in question. These are internal questions. Revolutionary science puts the framework itself in doubt and the search for a new framework begins. The motivation, usefulness, and adequacy of different frameworks and theories are questioned. These are external questions, questions about which framework to adopt. This might be old hat, but it had never been clear to me why Carnap said that he thought Kuhn's work lined up with his own. I was fairly clear on why, say, Hempel would like it. Putting things this way made the affinity with Carnap's views clearer, which was helpful.

Friedman's main complaint about Kuhn seems to be that Kuhn treats philosophy ahistorically, roughly the way that Kuhn accuses philosophers of having treated science. Friedman's project seems to be to show how, when viewed more historically, developments in philosophy closely parallel developments in science and contribute to science during the revolutionary periods. Philosophy is supposed to provide new conceptual possibilities on which science can draw when the need arises. This provides the sketch of an answer to the problem with which the lecture opens, the relation of the sciences to philosophy.

The setup seems promising. The first lecture was a bit high altitude. There are lots of details that Friedman needs to supply here, like more examples of philosophers providing such conceptual possibilities, as opposed to mathematicians or, say, physicists. The historical narrative is engaging, and I like the general motivation for studying the history of philosophy. It seems like it leaves a lot of philosophy in the lurch though. Friedman's ideas don't seem particularly applicable to ethics and aesthetics. I'm not sure what the story is supposed to be for the relation of philosophy to math, since the latter is difficult to fit into the Kuhnian mould. Lastly, I'm not sure why we want philosophy to have that role. I suppose it would justify some bits of philosophy to some, but I don't yet have a clear idea of what course of development it would recommend for philosophy as a whole. It seems like it should say something normative.

Scattered notes on Wittgenstein

I'm reading through some of Wittgenstein's lectures on the foundations of mathematics. I'm not really sure what to expect. I thought I'd write up some notes on it as I went along. This is, in my opinion, the only way to read Wittgenstein. I figured I'd post them in case anyone can help shed some light on what is happening or is interested. Read more

Wittgenstein's approach to understanding in the first few lectures (I assume throughout) seems to be that to understand a concept is to be able to use it. Suppose that someone says that they understand the same concept I do and we use it in the same way in several cases but then diverge. Wittgenstein seems to think that this means that we understand it differently, because our uses of the concept diverge. At the start of lecture two, Wittgenstein distinguishes two criteria for understanding. One is where you answer a question about understanding with "Of course." If asked whether one understands "house", the response will be an affirmative. The other criterion is how the word is used, indicating houses, etc. Wittgenstein seems to cast some doubt on the first criterion of understanding. He seems to be unsure what justifies it except the second criterion.

This is probably related. He thinks that part of understanding what a mathematical discovery is consists in seeing the proof of it. He gives a sample exchange: "Suppose Professor Hardy came to me and said "Wittgenstein, I've made a great discovery. I've found that ..." I would say, "I am not a mathematician, and therefore I won't be surprised at what you say. For I cannot know what you mean until I know how you've found it." We have no right to be surprised at what he tells us. For although he speaks English, yet the meaning of what he says depends upon the calculations he has made." The proof of a mathematical claim isn't an application of the claim. It does demonstrate a use of the concepts involved, which use, I suppose, gives the meaning of the proposition. I should note that it is a little weird for the exchange to talk about a discovery, at least from Wittgenstein's perspective. Near the end of the first lecture he says that he will try to convince us that mathematical discoveries are better described as mathematical inventions.

Why would we need to see the proof of a statement to understand it? The proof provides an illustration of the use of the concepts involved. This begins to shed some light on why different proofs of one proposition are interesting. The proposition has been shown to be true once demonstrated, assuming the proof is good, so additional confirmations of this aren't that exciting. What is useful is seeing different ways in which the concepts can be used together. This seems to result in understanding the proposition in different ways. Does it result in different meanings for the proposition? I'm not sure. It isn't clear what role, if any, the idea of meaning plays in these lectures.

In lecture four, Wittgenstein talks about people who use things that look like mathematical propositions without them being integrated into a wider mathematical context. The example is kind of weird. It is a group of people that measure things with rulers then measure to figure out how much something will weigh. The question is whether they are working with physical or mathematical propositions. Wittgenstein thinks "both" might be a reasonable answer, but he gives a follow-up suggestion that might indicate that this isn't his view. He says that there is a view that math consists of propositions and there is another view that it consists of calculations. It seems like the latter will be his view.

The first few lectures have been a bit difficult to pull together, although going forward a bit, I think I'm getting a better sense of some of the issues. More notes to follow I expect.

Wednesday, September 10, 2008

Systematicity

I'm reading a nice review of Hylton's Quine book on NDPR, and one line struck me. The review says, "Hylton's pivotal interpretative thesis is that Quine -- contrary to widespread opinions -- is basically a systematic philosopher." I found this somewhat surprising since it seems quite hard, to me, to view Quine as a non-systematic philosopher. This is elaborated in the review: "That means, according to Hylton, that his main purpose is constructive rather than negative." I don't think this ameliorates matters. I'm still surprised. Quine has his destructive/negative side, sure, but it is supplemented with an integrated, systematic (for lack of a better adjective) view of the world. One could see Quine as entirely negative if one stopped reading him at "Two Dogmas" but a glance through Word and Object should give hints of the systematic side. I am, consequently, surprised by the opening of the review. Who thought Quine was entirely negative and why?

There is a similarly surprising line, albeit to a lesser degree, in a review Fodor wrote of a collection of Davidson's essays. I think it was in the London Review of Books. Fodor says something along the lines of: it turns out that Davidson's thought is fairly unified after all. Maybe it was harder to piece together as it was appearing, when Davidson's work was scattered across a bunch of journals.

Saturday, September 06, 2008

A note on relevance logic

In the proof theory class I'm taking, Belnap introduced several different axiomatic systems, their natural deduction counterparts, and deduction theorems linking them. We started with the Heyting axiomatization for intuitionistic logic and the Fitch formulation of natural deduction for it.

The neat thing was the explanation of how to turn the intuitionistic natural deduction system into relevance logic. To do this, we add a set of indices to attach to formulas. When a formula is assumed, it receives exactly one index (a set containing one index), which is not attached to any other formula. The rule for →-In still discharges assumptions, but it is changed so that the set of indices attached to A→B is equal to the set attached to B minus the set attached to A, and A's index must be among B's indices. This enforces non-vacuous discharge. It also restricts what things can be discharged. The way it was glossed was that A must be used in the derivation of B.

From what I've said there isn't any way for a set of indices to get bigger. The rule for →-Elim does just that. When B is obtained from A and A→B, B's indices will be equal to the union of A's and A→B's. This builds up indices on formulas in a derivation, creating a record of what was used to get what. Only the indices of formulas used in an instance of →-Elim make it into the set of indices for the conclusion, so superfluous assumptions can't sneak in and appear to be relevant to a conclusion.
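To have the rules in one place, here is my rendering, in the style of Anderson and Belnap's subscripted natural deduction; the notation may differ from what we used in class. Writing A[α] for the formula A carrying the index set α:

Assumption: A[{k}], where k is a new index.
→-In: given a subproof from A[{k}] to B[β], with k ∈ β, infer (A→B)[β − {k}].
→-Elim: from A[α] and (A→B)[β], infer B[α ∪ β].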

This doesn't on the face of it seem like a large change: just the addition of some indices, with a minor change to the assumption rule and the intro and elim rules. The rule for reiterating isn't changed; indices don't change for it. Reiterating a formula into a subproof puts it under the assumption of the subproof, in the sense of appearing below it in the Fitch proof, but not in the sense of dependence. The indices and the changes in the rules induce new structural restrictions, as others have noted. We haven't gotten to sequent calculi or display logic, so I'm not going to go into what the characterization of relevance logic would look like in those. Given my recent excursion into Howard-Curry land, I do want to mention what must be done to get relevance logic in the λ-calculus. A restriction has to be placed on the abstraction rule, i.e. no vacuous abstractions are allowed. This is roughly what one would expect. Given the connection between conditionals being functions from proofs of their antecedents to proofs of their consequents and λ-abstraction forming functions, putting a restriction on the former should translate to a restriction on the latter, which it does.
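One way to see what the abstraction restriction rules out (my example, not one from the class): the K combinator is a vacuous abstraction, since its second argument never gets used, and its type is exactly the conditional that relevance logic rejects.

-- A sketch in Haskell. K's type, a -> b -> a, is the "positive paradox" axiom
-- A→(B→A), a theorem of intuitionistic but not of relevance logic.
-- Banning vacuous abstraction blocks exactly this term.
k :: a -> b -> a
k x _ = x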

Classical Howard-Curry

I've been reading Lectures on the Howard-Curry Isomorphism by Sørensen and Urzyczyn recently. I wanted to comment on one of the interesting things in it. Read more

Briefly, the Howard-Curry isomorphism is an isomorphism between proofs and λ-terms. In particular, it is well developed and explored for intuitionistic logic and typed λ-calculi. It makes good sense of the BHK interpretation of intuitionistic logic. The intuitionistic conditional A→B is understood as a way of transforming a proof of A into a proof of B. λ-abstraction over a variable of type A in a term of type B yields a function that, when given a term of type A, results in a term of type B. There is a nice connection that can be made explicit between intuitionistic logic and computability.
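A minimal Haskell sketch of the correspondence (my own illustration, reading types as propositions and programs as proofs):

-- A proof of A -> B is a function from proofs of A to proofs of B,
-- a proof of A & B is a pair, and a proof of A v B is a tagged value.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

conjIntro :: a -> b -> (a, b)
conjIntro x y = (x, y)

disjIntroLeft :: a -> Either a b
disjIntroLeft = Left

Each definition is total and needs no tricks, which is the λ-calculus face of the intuitionistic provability of the corresponding formulas.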

I'm not sure if I've read anyone using this to argue for intuitionistic logic over classical logic explicitly. Something like this is motivating Martin-Löf, I think. Classical logic doesn't have this nice correspondence. At least, this is what I had thought. One of the surprising things in this book is that it presents a developed correspondence between classical proofs and λ-terms that makes sense of the double negation elimination rule, which is rejected by intuitionistic logic.

Double negation elimination corresponds to what the book terms a control structure. I'm not entirely clear on what this is supposed to mean. It apparently is from programming language theory. It involves the addition to the lambda calculus of a bunch of "addresses" and an operator μ for binding them. It is a little opaque to me what these addresses are supposed to be. When thinking about them in terms of computers, which is their conceptual origin I expect, it makes some sense to think of them in terms of place in memory or some such. I'm not sure how one should think of them generally. (I'm not sure if this is the right way to think of them in this context either.) Anyway, these addresses, like the terms themselves, come with types and rules of application and abstraction. There are also rules given for the way the types of the addresses and the types of the terms interact that involve negations. To make this a little clearer, the rule for address abstraction is:
Γ, a:¬σ |- M:⊥, infer
Γ |- (μa:¬σ. M):σ.
The rule for address application is:
Γ, a:¬σ |- M:σ, infer
Γ |- ([a]M):⊥.
In the above, Γ is a typing context (a set of assumptions pairing variables and addresses with types), M is a term, and the things after the colons are types. (Anyone know of a way to get the single turnstile in html?)

The upshot of this, I take it, is that we can make (constructive?) computational sense of classical logic, as we can of intuitionistic, relevance, and linear logics. Not exactly like them, since the classical case requires the addition of a second sort of thing, the addresses, in addition to the λ-terms, and another operator besides. Assessing the philosophical worth of this depends on getting clear on what the addition of this second sort of thing amounts to. I can't reach any conclusions on the basis of what is given in the book. If the addresses are innocuous, then it seems like one could use this to construct an argument against some of Dummett's and Prawitz's views about meaning. This would proceed along the lines that, despite Dummett's and Prawitz's arguments to the contrary, we can make sense of the double negation elimination rule in terms of this particular correspondence. I don't have any more meat to put on that skeleton because I don't have a good sense of the details of their arguments, just rough characterizations of their conclusions.
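For a programmer's-eye view of the same territory (a sketch of my own, not the book's λμ notation): Griffin's observation was that control operators correspond to classical principles. In Haskell's continuation monad, callCC inhabits a version of Peirce's law, ((A→B)→A)→A, which is classically but not intuitionistically valid.

import Control.Monad.Cont (Cont, callCC)

-- callCC's type is a continuation-monad rendering of Peirce's law.
peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
peirce = callCC

The fixed answer type r plays a role loosely analogous to the ⊥ in the typing rules above.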

There is also a brief discussion of Kolmogorov's double negation embedding of classical logic into intuitionistic logic. This is cashed out in computational terms. It's proved that if a term is derivable in the λμ calculus then its translation is derivable in the λ-calculus. One would expect this result since the translation shows that for anything provable in classical logic, its translation is provable in intuitionistic logic. It's filling in the arrows in the diagram.

One thing that seemed to be missing in this part of the book was a discussion of combinators. One can characterize intuitionistic logic in terms of combinators. A similar characterization can be done for relevance logic and linear logic. There wasn't any such characterization given for classical logic. Why would this be? The combinators correspond to certain λ-terms. Classical logic requires moving beyond the λ-calculus to the λμ calculus. The combinators either can't express what is going on with μ or such an expression hasn't been found yet, I expect. (Would another sort of combinator be needed to do this?) [Edit: As Noam notes in the comments, I was wrong on my conjecture. No new combinators are needed and apparently the old ones suffice.]

Friday, September 05, 2008

Russell on Hegel and math

Russell, in "Introduction to Mathematical Philosophy," says:
"It has been thought ever since the time of Leibniz that the differential and integral calculus required infinitesimal quantities. Mathematicians (especially Weierstrass) proved that this is an error; but errors incorporated, e.g. in what Hegel has to say about mathematics, die hard, and philosophers have tended to ignore the work of such men as Weierstrass."
The philosophy of math class I'm taking now will probably show me what the work of a mathematician like Weierstrass has to offer philosophers. (I don't really know.) However, this quote made me wonder what errors Russell had in mind. What errors were incorporated into what Hegel said about mathematics? Did Hegel talk about infinitesimals? Russell doesn't specify what the errors are, which is too bad. I hope he doesn't mean the alleged quip about the necessity of the number of planets being 9. Whether or not Hegel actually said that, it is a stretch to call that an error in mathematics.

Wednesday, September 03, 2008

Two questions

I thought I'd post a couple of questions, which I'm sort of looking into, while more substantive posts are still in development.

First question: Are there discussions of linear logic anywhere in the philosophical literature? There's a lot on relevance logic and a lot on intuitionistic logic. I'm not sure where to find stuff on linear logic. Failing that, are there any computer science articles that include philosophical discussions of it that go beyond "premises are like resources"? (That's just something I've seen a lot. It is a helpful though opaque metaphor.) I don't know that I have it in me to work through Girard's stuff yet.

Second question: Are there any philosophical books or articles that talk about formal language theory? (I mean more than just Turing machines.) Brandom's Locke lectures have a short discussion of it, mainly the Chomsky hierarchy, early on, but that falls by the wayside and is laden heavily with Brandom's project. I bet there's something neat of a philosophical bent somewhere in the computer science literature, but I have no clue.

Friday, August 29, 2008

Adaptive logic

A while ago, Ole mentioned a presentation on adaptive logic by some logicians from Ghent. It sounded pretty interesting. Apparently one of the Ghent logicians, Rafal Urbaniak, has started a blog, the first posts of which are on that very topic. Rafal was nice enough to link to a long introduction to adaptive logic. I haven't had the chance to go through it all yet, but it looks solid. How am I supposed to narrow my interests when neat stuff like this keeps popping up?

Saturday, August 23, 2008

From Logic and Structure

Amazon has been telling me repeatedly that a new edition of Dirk van Dalen's Logic and Structure is coming out soon, so I thought I'd look at an older version. I picked up a copy of the third edition. The opening of the preface is too good to pass up, if one ignores the run-on sentences:

"Logic appears in a 'sacred' and in a 'profane' form; the sacred form is dominant in proof theory, the profane form in model theory. The phenomenon is not unfamiliar, one observes this dichotomy also in other areas, e.g. set theory and recursion theory. Some early catastrophes such as the discovery of the set theoretical paradoxes or the definability paradoxes make us treat a subject for some time with the utmost awe and diffidence. Sooner or later, however, people start to treat the matter in a more free and easy way. Being raised in the 'sacred' tradition my first encounter with the profane tradition was something like a culture shock. .. In the course of time I have come to accept this viewpoint as the didactically sound one: before going into esoteric niceties one should develop a certain feeling for the subject and obtain a reasonable amount of plain working knowledge. For this reason this introductory text sets out in the profane vein and tends towards the sacred only at the end."

I don't have any comment on the book's contents as I haven't slogged through it. The brief chapter on second-order logic makes an interesting point though. It shows how all the connectives of classical logic can be defined using just → and ∀, although this does require both first- and second-order quantifiers. The book obscures this fact in the statement of the theorem. This isn't surprising once one sees the proof, but it is still neat.
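For the record, the definitions look something like this (my reconstruction of the standard second-order definitions, not necessarily van Dalen's exact statement):

⊥ := ∀p.p
¬A := A→⊥
A∧B := ∀p.((A→(B→p))→p)
A∨B := ∀p.((A→p)→((B→p)→p))
∃x A(x) := ∀p.(∀x(A(x)→p)→p)

The propositional connectives only need the second-order (propositional) quantifier; the definition of ∃ is where the first-order quantifier comes in.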

Once more, again

The calm of summer is about to give way to the less calm start of the new year. Classes start on Monday. Posting has been a little slow because I've been plugging away at some things which have eaten into my blogging energy. I'm nearing completion on them though. Once classes start I should have some more things to talk about, so posting will, I hope, become regular again. I'm not teaching this year and I'm going to try to put the extra time to good use.

I have a good looking line up of classes. I'll be taking three. Belnap is teaching a proof theory class for which we'll be using Restall's book. I am, of course, looking forward to it. Gupta is teaching a seminar on truth. I'm not sure what we're reading. I think I remember hearing that the focus is on revision and fixed-point theories of truth, but I'll have a better idea soon. Wilson is teaching a seminar on the philosophy of math and it looks like we're going to be focusing on Russell, Cantor, Frege, and Dedekind, which should be interesting. With any luck I will be done with official class work by the end of the term and I'll have the glimmerings of a prospectus idea.

Wednesday, August 13, 2008

Some comments on incompatibility semantics

The first thing to note about the incompatibility semantics in the earlier post is that it is for a logic that is monotonic in side formulas, as well as in the antecedents of conditionals. (Is there a term for the latter? I.e. if p→q then p&r→q.) This is because of the way incompatibility entailment is defined. If X entails Y, then ∩_{p∈Y} I(p) ⊆ I(X). This holds for all Z ⊇ X as well, since I(X) ⊆ I(Z) whenever X ⊆ Z, so ∩_{p∈Y} I(p) ⊆ I(Z). This wouldn't be all that interesting to note, since usually non-monotonicity is the interesting property, except that Brandom is big on material inference, which is non-monotonic. The incompatibility semantics as given in the Locke Lectures is then not a semantics for material inference. This is not to say that it can't be augmented in some way to come up with an incompatibility semantics for a non-monotonic logic. There is a bit of a gap between the project in MIE and the incompatibility semantics. Read more

Since this semantics isn't for material inference, what is it good for? It is a semantics for classical propositional logic, but we already had one of those in the form of truth tables, which are fairly nice to work with and easy to get a handle on. One reason to care is that the incompatibility semantics validates the tautologies of classical logic without using truth. Unless one is an inferentialist this is probably not that exciting. It seems like it should lend some support to some of Brandom's claims in MIE, but this depends on the sort of incompatibility used in the incompatibility semantics being the sort of thing an inferentialist can adopt. I'm not sure that incompatibility as it is defined in MIE or AR is the same as this notion, and so some further argument is needed to justify an inferentialist's use of this notion.

Incompatibility semantics has at least two generally interesting points I want to mention here. [Edit: This paragraph needed a longer incubation period. I've removed the point that was originally here that is not interesting and wrong in parts.] Also, it is possible that no set of sentences is coherent. As noted in the appendices, there could be a degenerate frame in which all the sentences are self-incoherent. There could also be incoherent atomic sentences.

The second is that it allows a definition of necessity that doesn't appeal to possible worlds or accessibility relations. The necessity defined is an S5 necessity. To get other modalities, either some more structure will have to be thrown in, possibly an accessibility relation, or a different definition of necessity will be needed. In any case, a modal notion is definable using sets of sentences and sets of sets of sentences. This would be somewhat surprising if we didn't note that incompatibility itself is supposed to be a modal notion, so, in a way, it would be surprising if it were not possible to define necessity using it. That it is S5 is a bit surprising. This leads to some cryptic comments by Brandom about intrinsic logics, but I won't broach those in this post.

I'm not sure if this is interesting. One of the theorems proved in the appendices to the Locke Lectures is that when X and Y are finite, X |= Y is equivalent to a finite boolean combination of entailments with fewer logical connectives. The important clauses here are: X |= Y, ¬p iff X, p |= Y, and X, ¬p |= Y iff X |= Y, p. One can flip sentences back and forth from one side of the turnstile. I think there are a couple of things to check to make sure this works, but, modulo those, this is the same situation as for the proof theory of classical propositional logic. It is possible to define a one-sided sequent system for classical propositional logic, so it seems likely that we could define a monadic consequence relation, something along the lines of: an entailment X |= Y holds iff X*, Y is valid, where X* is the result of negating everything in X. I'm not sure if this is interesting because I'm not sure what, if any, advantage this would offer over the concept of consequence defined in the Locke Lectures. The one-sided sequent system yields a fast way to prove whether a given set of sentences is valid or not. It's not clear that checking whether a set of sentences is valid in the monadic consequence relation would offer any gain over computing the incompatibilities directly to check incompatibility-validity. (This may be a really trivial point for any semantics for classical logic, but it isn't something I've thought about.)
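Spelling the flipping out, for X = {x1, ..., xn}:

x1, ..., xn |= Y iff x2, ..., xn |= Y, ¬x1 iff ... iff |= Y, ¬x1, ..., ¬xn

so X |= Y holds just in case the set Y ∪ {¬x1, ..., ¬xn} is valid, which is the monadic consequence relation gestured at above.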

Tuesday, August 12, 2008

La la la links

In lieu of commentary on incompatibility semantics today, here are a couple of worthwhile links.

The first is a tutorial on how to use Zorn's lemma by Tim Gowers. It is quite good and has several examples.

The second is a short piece on time management by Terry Tao. Most productivity stuff I read online is aimed more at the business or tech industry crowd, so I enjoyed reading suggestions by a successful academic aimed at the academic crowd.

Friday, August 08, 2008

Basics of incompatibility semantics

I've been spending some time learning the incompatibility semantics in the appendices to the fifth of Brandom's Locke Lectures. The book version of the lectures just came out but the text is still available on Brandom's website. I don't think the incompatibility semantics is that well known, so I'll present the basics. This will be a book report on the relevant appendices. A more original post will follow later. Read more

The project is motivated in Brandom's Locke Lectures. He does not want to take truth as a primitive notion since he doesn't want to start with notions regarded as representationalist. Rather, he opts for incompatibility, or incoherence. It is important that, to start with, incoherence is not formal incoherence. Atomic propositions taken together can be incoherent. Incoherence is linked to the notion of incompatibility by the following: for sets of sentences X,Y, X∪Y∈ Inc iff X∈ I(Y), where I is a function from a set of sentences to the set of sets of sentences it is incompatible with. From this definition it is immediate that X∈I(Y) iff Y∈I(X). It also turns out that given a language and an Inc property one can define a unique, minimal I and similarly for Inc given a language and an I function.

It is also taken as an axiom that if a set X is incoherent, then all sets Y∪X are also incoherent. The incoherence of a set of sentences can't be fixed by adding sentences to it.

Starting with an Inc property, logical connectives and a notion of entailment can be defined. These are more or less as would be expected from Making It Explicit and Articulating Reasons. The notion of entailment is one of incompatibility: X |= Y iff ∩_{p∈Y} I(p) ⊆ I(X). (I'm using the convention of dropping the brackets for singletons when it improves readability.) This definition says that X entails Y when everything incompatible with every member of Y is incompatible with X. With this notion in mind, validity for a set X can be defined as: anything incompatible with everything in X is itself incoherent, which is equivalent to |= X. Negation is defined as: X∪{¬p} ∈ Inc iff X |= p. Conjunction is defined as: X∪{p&q} ∈ Inc iff X∪{p,q} ∈ Inc. Disjunction and the conditional are defined from these in the standard ways.
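To make the entailment clause concrete, here is a minimal, hypothetical Haskell sketch of my own over a finite language, with Inc generated by a list of minimal incoherent sets and closed upward under supersets, as the axiom above requires. It covers only the base language, not the defined connectives.

import Data.List (subsequences, nub)

type Sentence = String
type SetS = [Sentence]  -- a set of sentences, represented as a list

-- a set is incoherent if it contains some minimal incoherent set
incoherent :: [SetS] -> SetS -> Bool
incoherent minimalInc x = any (all (`elem` x)) minimalInc

-- X |= Y iff every Z incompatible with each member of Y is incompatible with X
entails :: [Sentence] -> [SetS] -> SetS -> SetS -> Bool
entails lang minimalInc x y =
  and [ incoherent minimalInc (nub (z ++ x))
      | z <- subsequences lang
      , all (\p -> incoherent minimalInc (nub (z ++ [p]))) y ]

For instance, with lang = ["p","q","r"] and minimalInc = [["p","q"]], the incoherent premiss set ["p","q"] comes out entailing everything, which is the explosion point noted at the end of this post.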

It turns out that from these definitions the connectives behave classically. Disjunction distributes over conjunction. Double negations can be eliminated. The entailments work out as expected for conjunctions on the left and on the right, i.e. X, p&q |= Y iff X, p, q |= Y, and X |= Y, p&q iff X |= Y, p and X |= Y, q. The left side of the entailment sign is conjunctive and the right side is disjunctive, e.g. X |= p, q iff X |= p∨q. Modus ponens is provable from these definitions.

The definition for necessity is a bit trickier. It is: X∪{□p}∈ Inc iff X∈ Inc or ∃Y(X∪Y∉ Inc and not: Y |= p). A necessary proposition, □p, is incompatible with a coherent set X iff there's some Y which is compatible with X and is compatible with something p isn't. Here is Brandom on the dual notion of possibility: "what is incompatible with ◊p is what is incompatible with everything compatible with something compatible with p." The semantics of modality without possible worlds involves looking at two sets of sentences (three counting {□p}) and their incompatibilities. The normal rule of necessitation falls out from the axioms and this definition. The modal logic that results from this together with the above definitions for negation and conjunction is classical S5.

I'll close with a brief comment. The definitions of incoherence and incompatibility used have as a consequence that an incoherent set entails everything. The principle of explosion is built into the incompatibility semantics. The motivating idea is that an incoherent set of sentences will behave differently in inference, in particular by acting as a premiss set for everything. This creates a problem, noted by Brandom, in dealing with relevance logics. Brandom sees the defining feature of relevance logic as the rejection of explosion, which would mean that minimal logic would be a relevance logic. Without explosion, incoherent sets of sentences would behave just like coherent sets of sentences, unless one already has negation, in which case for some p an incoherent set would at least entail both p and ¬p. Part of the point of Brandom's project is that there is a coherent way to define logical vocabulary from a base language without any logical vocabulary, so this is not an option. The possibility hinted at in the Locke Lectures is to define an absurdity constant and then have the incoherent sets imply that constant, but that has not yet been worked out.

I think working through this sheds some light on the otherwise cryptic comments about intrinsic logic that come up in lecture 5, but I'll save that, as well as my other commentary on this stuff, for another post.

Tuesday, August 05, 2008

On primary math education

Yesterday I was pointed to an essay, a lament, on primary school math education in the US, written by a K-12 math teacher, that I want to share. It is well-written and makes some good points about teaching, which activity puzzles me. It also contains some entertaining and interesting dialogues, a la Perry, Lakatos, Feyerabend and Berkeley.

Keith Devlin also has a couple of columns about conceptual understanding and why multiplication isn't repeated addition: here, here, and here. [Edit: Duck points out that there is a good discussion of some of Devlin's stuff at Good Math, Bad Math.]

Sunday, August 03, 2008

Short note on Wittgenstein and Church

In the Tractatus, Wittgenstein gives a definition of number as the exponent of an operation. In something I read recently, although for the life of me I cannot figure out what it was, the author pointed out that the basic idea in the Tractatus is the same as that of Church numerals. The definition of number in TLP has seemed somewhat obscure to me in the context of the TLP, so this comparison helped clarify things. The definition of number in TLP comes at 6.02. Let's call the basic operation S and name an element x. The number 0 is x, 1 is Sx, n is S^n x, and n+1 is S(S^n x). While not in lambda notation, this is fairly close to Church's definition of the numerals and of successor.
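Here is the comparison in Haskell rather than the untyped λ-calculus (my own sketch; a monomorphic type stands in for the usual impredicative numeral type):

-- A Church numeral n is the n-fold iteration of an operation s on a base x,
-- which is the same idea as the TLP's "exponent of an operation".
type Church a = (a -> a) -> a -> a

zero, one :: Church a
zero s x = x
one  s x = s x

suc :: Church a -> Church a
suc n s x = s (n s x)

-- e.g. suc (suc zero) (+1) 0 evaluates to 2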

Together with some of Michael Kremer's remarks on Tractarian views of math, it makes the TLP seem concerned with computation as opposed to just structural concerns, which one would expect of something in the broadly logicist vein. (This may not be fair to all logicists. There did not seem to be a comparable concern with computation in Frege or Russell. The distinction I'm using here is the one drawn by Jeremy Avigad between math as a theory of structure and math as a theory of computation in his "Response to Questionnaire" on his website. Although, Avigad points out that before the 20th century mathematicians were concerned with computational aspects of proof more than structural ones.) In Russell's introduction to the TLP, he criticizes Wittgenstein's definition of number for not being able to handle transfinite numbers. If computation is supposed to be an important theme in the TLP, then this would not be bad. The transfinite numbers would not be the sort of thing that we would be computing with recursively. An interesting historical question is whether there were any reviews written of Church's work which pointed out that his definition only worked for finite numbers, echoing Russell's criticism of the TLP. Since the TLP was written before Church's, Turing's, or Goedel's work on computability made it a more precise mathematical notion, it seems likely that the theme would remain implicit in the book rather than being made explicit.

Sunday, July 27, 2008

Resume blogging

I'm back from my travels. I'm going to try to get back into blogging in the next few days. To help ease my way back into writing things, I'll start with a link. Fellow Pitt grad student Justin Sytsma has started blogging at My Mind Is Made Up. It looks like he's going to be talking about experimental philosophy and some of his work in it.

Saturday, July 12, 2008

Yet another brief hiatus

I am in Argentina at the moment doing some traveling. I had an ambitious plan to write some posts and do a lot of philosophy during my stay in Argentina. While I am drinking the requisite amount of coffee for such an endeavor, I haven't really been able to write anything lately. Posting should resume when I get back to the States in about two weeks.

Tuesday, July 01, 2008

Honest toil loses again

I've made some recent additions to the blogroll, most of which probably should've been added long ago. Colin at Inconsistent Thoughts has been writing some interesting things about logical matters. Aaron at Conundrum has been posting some neat things about truth. Andrew at Possibly Philosophy has up several good posts on causation, among other things. Finally, fellow Pitt grad student Bryan has been posting a lot about the philosophy of physics at Soul Physics, in addition to a long list of links to YouTube clips of philosophers.

Wednesday, June 25, 2008

More notes on MacFarlane

MacFarlane's project in his dissertation requires that he make sense of quantifiers in terms of his presemantics. The initial suggestion is to assign quantifiers the type ((O => V) => V), where O is the basic object type and V is the truth value type. Two problems arise for this. The first is that the quantifiers could receive interpretations that are sensitive to the domain of objects; in any case, the quantifiers receive different interpretations as the domains vary. This leads to the second problem. What do the variable domains represent? Why do we use them? MacFarlane follows Etchemendy here. Etchemendy says that there are two competing ways of understanding the variable domains, neither of which the apparatus of variable domains satisfactorily captures. The first is understanding the variable domains as representing the things that exist at each possible world, with models representing worlds. Three objections to this are given, only two of which I will mention. One is that it seems to make the strong metaphysical claim that for any set of objects at all, the world could have contained just those objects. There might be ways to respond to this; MacFarlane cites a couple of attempts, one of which appeals to "subworlds." The other objection, which seems promising, is that it is hard to square this with the use of frames in modal logic. If the various domains are parts of worlds in different frames, then we must make sense of ways the very structure of possibility could have been, in MacFarlane's phrase. This seems like a problem. I think some people have objected along these lines to David Lewis's modal realism. Making sense of the moving parts of modal logic is hard. Read more

The other way of understanding variable domains is as picking out different meanings for terms and quantifiers. This gets around some of the problems with the possible worlds understanding. It runs into a problem with cross-term restrictions, restrictions put on one class of terms by another class. It becomes unclear on this understanding why the same domain is used for both the universal and existential quantifier. It seems like one could stipulate the usual interdefinability. Etchemendy's point leads to the question of why the same domain is used for singular terms as well as the quantifiers. It isn't clear what a principled response to this would be. MacFarlane and Etchemendy both seem to find it decisive.

In response to these worries, MacFarlane suggests that the proper way of understanding the variable domains is not either of these two. Rather, he thinks that it is as "a specification of a presemantic type: the type O, from which semantic values for singular terms is to be drawn in an interpretation." (p. 199) Using variable domains is just using different basic presemantic types. Before proceeding to the point that I really wanted to focus on, I want to comment on this. This is a better explanation of the variable domains only if we have a good grip on what a presemantic type is. I'm not sure that I have enough of a grip on it for it to explain variable domains. The types are sets of things (or functions, constant or not) and functions on those (or sets of those). Does this really explain or give us a better understanding of the use of variable quantifier domains? It seems like we would want to appeal to the same intuitions used for variable domains, i.e. the possible meanings and possible worlds intuitions above, for the basic types. I'm not sure how the types fare with respect to the objections to the possible worlds view. It seems to get around the objections to the possible meanings view since all the types are defined with respect to the basic types and so build in the cross-term restrictions.
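To fix ideas about the quantifier type mentioned above, here is a toy Haskell rendering of my own (not MacFarlane's formalism): relative to a chosen, hypothetical finite domain, a quantifier takes a property of objects to a truth value, and the domain-sensitivity shows up as the extra dom parameter.

type V = Bool

forAllOver, existsOver :: [o] -> ((o -> V) -> V)
forAllOver dom p = all p dom
existsOver dom p = any p dom

-- e.g. forAllOver [2,4,6] even == True, but forAllOver [2,3] even == False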

MacFarlane continues by saying that the basic types are indexical. That is, the basic types in the presemantic ontology are functions from contexts (or points of evaluation; MacFarlane does not use this term) to domains or types. This is clearly based on work in the philosophy of language, such as Kaplan's. The basic type O is relative to an index, namely the sortal or set of sortals that specifies "object" in a given interpretation. The idea, from Brandom, is that "object" and "thing" are presortals, which depend on context for complete specification since they do not carry with them the criteria of individuation that other sortals carry. I think this is something that he picks up from Quine, Evans, Strawson, Gupta, and others. The understanding of variable domains then depends on this view about sortals. Granted, MacFarlane points out that treating all the basic types as indexical in this way results in a simpler presemantic theory. Other logics can vary the type V of sentential values, say, assigning different sets of propositions in different settings.

Later on MacFarlane says that semantics should answer to postsemantics, which is rooted squarely in the notions of assertion and inference, again notions from the philosophy of language. MacFarlane suggests that a coherent, useful philosophy of logic should be rooted in other philosophical views, in particular views from the philosophy of language. It might be useful to extend this to the philosophy of mind, since logical notions on MacFarlane's view are supposed to be normative for thought as such. I'm not sure how one would avoid engaging in some philosophy of mind with a claim like that.

This idea strikes me as quite sensible but I want to register a concern. If the justification for some views in the philosophy of logic comes from particular views in the philosophy of language, then there is a worry about circularity arising when logical views are used to adjudicate disputes in the philosophy of language. Quine comes to mind as someone whose logical views figured prominently in his views on language. It is not necessary that a circularity arise; the justifying views in the philosophy of language may not be the ones that the particular philosophy of logic is being used to defend. Of course, this sort of dependence might not strike one as bad at all if one adopts a more coherentist outlook on things. It had surprised me since I had thought that logic, and by extension the philosophy of logic, were more foundational, so that this sort of dependence on other areas of philosophy did not arise. If the foundational aspirations for the philosophy of logic are abandoned, then this is even less of a problem. This requires allowing more possible answers to the question: what can justify a view about the nature of logic? Possible answers to this question, both historically and in MacFarlane's work, seem rather restricted though, so there can't be that much widening. One could respond that while logic might play some foundational role, the philosophy of logic need not, and so justification in that domain can come from a much wider range of sources, even though it has not historically. To abuse a metaphor, while logic is near the center of one's web of belief, the philosophy of logic stands farther out. I'm doubtful this can be right since philosophers, especially post-Tarskian ones, would expect the philosophy of logic to answer, or at least address, the demarcation question: what is a logical constant? The answer to this changes what the logic at the core of that web looks like, and, by continued abuse of the metaphor, how large chunks of the rest of the web look. This suggests a bit more of a foundational role for the philosophy of logic.