Sunday, April 29, 2007

Neat little problem

One of the UT grad students gave me this little logic problem after the conference, which was delightful. Apparently it comes from Dummett's Elements of Intuitionism. The problem is to prove that in intuitionistic logic there are infinitely many pairwise non-equivalent sentences built from just a single propositional variable. I will post a solution in a few days if there isn't one in the comments before then.
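
For the impatient, a hedged pointer (this is the standard construction from the literature, usually credited to Rieger and Nishimura; I'm assuming, without having checked, that it is the one Dummett has in mind): up to indexing conventions, define formulas in the single variable p by

    A_0 = \bot,   A_1 = p,   A_2 = ~p,
    A_{2n+3} = A_{2n+1} v A_{2n+2},
    A_{2n+4} = A_{2n+3} -> A_{2n+1}.

The work is then to show, say by exhibiting finite Kripke trees that separate them, that no two of these are equivalent in intuitionistic logic.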

Friday, April 20, 2007

Easy to say, hard to do

The other day I picked up a copy of Foot's Natural Goodness, a book that was lying out on a table in the reading room. I flipped through the intro and came across a great anecdote (on the first page, as it turns out) about one of the few times Wittgenstein went to a public lecture at the university he was associated with:
"Wittgenstein interrupted a speaker who had realized that he was about to say something that, although it seemed compelling, was clearly ridiculous, and was trying (as we all do in such circumstances) to say something sensible instead. "No," said Wittgenstein. "Say what you want to say. Be crude and then we shall get on." The suggestion that in doing philosophy one should not try to banish or tidy up a ludicrously crude but troubling thought, but rather give it its day, its week, in court, seems to me very helpful. It chimes of course with Wittgenstein's idea that in philosophy it is very difficult to work as slowly as one should."

The moral of the story is what I really thought was good: it is pretty hard to go as slowly as one should. This is probably my biggest stumbling block. There tends to be a bit of a pseudo-Humpty Dumpty idea behind my rushing through things: I know what I mean, so why explain? But this just does not fly as a grad student. Alas.

Gentzen and Kripke and a coincidence?

I noticed a little symmetry recently and I'm not sure how to explain it yet. The symmetry is between intuitionistic and classical logic. In the Gentzen presentation of classical logic, sequents have the form \Gamma => \Delta, where \Gamma and \Delta are multisets of formulas. The inference rules for classical and intuitionistic logic are the same with one exception: the intuitionistic rules restrict \Delta to at most one formula. So, proof theoretically (in Gentzen) we have potentially several formulas in the conclusion for classical logic and only one for intuitionistic logic. Now switch gears to semantics. The semantics for intuitionistic logic is given by Kripke trees (W, ≤). Restrict the set of worlds W to a singleton and it becomes classical semantics. So, semantically (in Kripke) we have several worlds for intuitionistic logic and only one world for classical logic. (You get the same thing if ≤ is an equivalence relation, but this just makes W behave as if it had only one world.)
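
For reference, here are the two sequent formats and the Kripke clause for negation that do the work below (standard definitions, nothing peculiar to this post):

    Classical (LK) sequents:       \Gamma => \Delta   (\Delta any multiset)
    Intuitionistic (LJ) sequents:  \Gamma => A  or  \Gamma =>   (at most one formula on the right)
    Kripke clause for negation:    w verifies ~A  iff  no w' ≥ w verifies A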

Why would this be? In proof theory, allowing multiple conclusions lets you move formulas back and forth at will between antecedent and consequent, e.g. p => p, \bot goes to => p, ~p (reading ~p as p -> \bot), which goes to => p v ~p. Restricting to one conclusion means you can't derive the law of excluded middle for arbitrary formulas like that, since the first step is blocked. In the semantics, a world verifies ~p only if all the worlds ≥ it do not verify p. This means that you can have a world w that doesn't verify p but also doesn't verify ~p, since some w' ≥ w does verify p. Restricting to one world eliminates this wiggle room: if w is the only world, the only w' ≥ w is w itself. If w verifies p, then p. If w doesn't verify p, then ~p. Classical. Why there should be any sort of symmetry here I don't know.
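
To make the contrast concrete, here are the derivation and the countermodel spelled out (textbook material, just making the steps above explicit):

    Classical derivation (LK):
        p => p            (axiom)
        p => p, \bot      (weakening on the right)
        => p, ~p          (right ->, reading ~p as p -> \bot)
        => p v ~p         (right v)

The sequent p => p, \bot already has two formulas on the right, so it is not a legal LJ sequent and the derivation is blocked at the first step.

    Intuitionistic countermodel:
        W = {w, w'},  w ≤ w',  p verified at w' only.
        w does not verify p, and w does not verify ~p (since w' ≥ w verifies p),
        so w does not verify p v ~p.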

Tuesday, April 17, 2007

Career advice from mathematicians

Terence Tao, recent winner of the Fields Medal, has posted some career advice for aspiring mathematicians here. Although it is aimed at mathematicians, it seems like good advice for aspiring philosophers and logicians too. I should hasten to add that I'm not trying to give anyone career advice, since (1) that would be ludicrous as I'm a graduate student and, relatedly, (2) that would be the height of hubris. I've just found that mathematicians say good things about writing and working, e.g. Halmos and Tao.

Sunday, April 15, 2007

Hypergaming

I had a short chat with one of the prospective Ph.D. students recently in which he told me about the hypergame paradox. Suppose we have the set of finite games, where finite games are games that must end in a finite number of turns. What are games? The guy said it was just the intuitive notion of a game; I didn't get more detail, so I won't give more. Hypergame is the game in which the only move available is to pick a finite game; that game is then played to completion, at which point hypergame is over. The question is: is hypergame a finite game? Suppose it is. Then picking hypergame itself is a legal move in hypergame; pick it at every turn and the play never ends, so hypergame isn't finite. Suppose it isn't. Then any game picked in hypergame comes from the set of finite games and ends in a finite number of turns, so hypergame ends one turn later. Therefore it is finite. Boom. Paradox.

I was trying to figure out where the paradox comes from. At first I thought it was because hypergame is impredicative: it is defined in terms of the totality of finite games, a totality to which it may itself belong. This was probably because I've been thinking a lot about Russell, his paradox, and such lately. I couldn't really formulate how this was supposed to work, though. I eventually came around to thinking that there is probably a way of reducing this to the halting problem. The idea is this. Hypergame is a Turing machine that tells you whether other Turing machines (games) halt (end in a finite number of steps). As Turing taught us, such a machine cannot exist, on pain of paradox. All that is needed to get this to work is a precise way of correlating games with Turing machines. Unfortunately, I didn't get enough details for this. With no idea what exactly constitutes a game, I have no idea how to line up games with Turing machines in anything more than a handwavy, metaphorical way. Alas!
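
Here is a toy sketch of the idea in Python; the modeling choices (games as callables, "finite" meaning "the call returns") are mine, not anything from the conversation:

    # Toy model: a game is a callable, and it counts as finite
    # if calling it eventually returns. All names are illustrative.

    def one_move_game():
        """A trivially finite game: it ends after a single move."""
        return "game over"

    def pick_a_finite_game():
        # The paradoxical pick: if hypergame is finite, this is legal...
        return hypergame

    def hypergame():
        """The only move: pick a finite game, then play it to completion."""
        chosen = pick_a_finite_game()
        # ...but if the pick is hypergame itself, this call never returns
        # (Python signals the non-termination with a RecursionError).
        return chosen()

If deciding "is this game finite?" could be done for arbitrary games, it would amount to deciding "does this call return?" for arbitrary callables, which is the halting problem; making that precise would need exactly the game-to-machine correlation despaired of above.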

Understanding names

In "Kripke, the Necessary Aposteriori, and the Two-Dimensionalist Heresy," Scott Soames argues that the two-dimensionalist (a la Chalmers and Jackson) response to Kripke's arguments for necessary a posteriori knowledge in Naming and Necessity do not work. I'm not going to go into the details of the article in this post, just point out something that kind of puzzles me. In the article Soames says various things about how speakers might understand a sentence or a name without knowing or believing certain things about the sentence or the name. For example, he says that "Peter Hempel lived on Lake Lane" and "Carl Hempel lived on Lake Lane" mean the same thing "even though speakers who understand them may not realize they do". I'm not sure if I've seen "understand" in the philosophical literature on semantics much. (Maybe I have and it has slipped my memory...) I'm not sure what exactly Soames means by "understanding". In the articles on propositional attitude reports I remember, everything is framed in terms of belief and propositions. In Lexical Competence by Diego Marconi, there is a delightful discussion of understanding and its degrees, but I am assuming that that is not what Soames has in mind. When one understands a sentence with a name in it, on the direct reference picture, what does one understand about the name? There is nothing to a name apart from its referent. But, if I don't realize that Carl and Peter Hempel are the same person, then it seems like I do not understand the name. I suppose I understand that they are both names, but that isn't understanding the sentences and the names; that's just understanding how constituents function in syntax. I imagine if one has access to Perry's theory of mental folders and roles that understanding has a natural home. One just starts playing information games with the names. But, that route isn't available. Direct reference about names is by itself rather austere, so understanding doesn't seem to happily fit into the picture there. It is important in setting up Soames's arguments that the agents understand the sentences and names, but it is difficult to see what that comes to.

Tuesday, April 10, 2007

More links

My friend Lindsay from California, who has accepted an offer from the philosophy PhD program at USC, has recently started a blog, first-order blogic. He is part of the reason I'm into Wittgenstein and was around for much of my formative venturing into the philosophy of language. All in all a delightful person.

Sunday, April 08, 2007

The Feynman Method, and some excuses

I haven't been posting much lately, it seems. Things have conspired to keep me busy enough that I have trouble finding time to post. The new prospective students came, and that took up much more time than I expected. I'm curious which of them will accept the offer from Pitt. It seems like it will be a good group for next year. Term papers are also starting to loom, so posting might continue to be somewhat sparse for a few weeks. Who knows. I might end up posting more as a method of structured procrastination. That is actually fairly likely. One of my papers got accepted for the UT Austin grad student conference (woo!), so I will head down there in a few weeks. I'll get to meet Aidan and see a few people I met while visiting places last year. My paper is on speech acts (history thereof) and situation semantics (application thereof).

Instead of coming up with a substantive post, I thought I'd put up a good quote from a lecture by mathematician Gian-Carlo Rota (found via n-Category Cafe I believe) on how to be a genius:
"Richard Feynman was fond of giving the following advice on how to be a genius. You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will
be a hit, and people will say, “How did he do it? He must be a genius!”"

Saturday, April 07, 2007

Brilliant editorial move

The second edition of Arthur Prior's Papers on Time and Tense features what is possibly the best editorial decision I've seen in a while. The editors decided that since infix (Russellian, in their terminology) notation won out, they would change all of Prior's formulas from their original Polish notation to infix notation. Polish notation is alright to read once you get used to it, but parentheses do make parsing easier. And Polish notation is a huge pain when the formulas stretch half the width of the page, as they do for some of the modal axioms. Also, I can't handle proofs of any complexity in Polish notation. Good call, editors!
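
For anyone who hasn't run into Polish notation, a quick illustration (my example, not one of Prior's):

    Polish:  CCpqCNqNp
    Infix:   (p -> q) -> (~q -> ~p)

C is the conditional and N is negation; parsing the prefix form means keeping a mental stack of pending connectives, which is exactly where the long modal axioms get painful.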

Wednesday, April 04, 2007

n-Trick Pony, for small n

I read an interview with a mathematician recently in which he said something kind of interesting: each mathematician has only a few tricks he uses in proofs. Even the most productive rely on a few tricks to generate a lot of results; even Erdős, even Hilbert, got most of their original results using a few basic tools. This got me wondering how well this holds for philosophers. Do most philosophers rely on just a few sorts of arguments? That probably holds for most philosophers. In reading some things by Korsgaard, I realized what one of her tricks is; this form of argument is repeated a few times. X is a conception of normativity/practical reasons/etc. that has as a consequence that some norm/reason/rule/etc. cannot be violated. But if it cannot in principle be violated, it cannot in principle be followed either. Therefore it isn't normative, and so is not a norm/reason/rule/etc. (This should seem Wittgensteinian, since she attributes a version of it to Wittgenstein in the form of his private language argument.)

Sunday, April 01, 2007

Another conception of philosophy

This one is from Bernard Williams in his "Philosophy as a Humanistic Discipline":
"What I have to say, since it is itself a piece of philosophy, is an example of what I take philosophy to be, part of a more general attempt to make the best sense of our life, and so of our intellectual activities, in the situation in which we find ourselves."

I found this via an interesting post at n-Category Cafe on (among other things) Bernard Williams, the role of philosophy in the history of philosophy, and their relation to the philosophy of math.