Tuesday, January 30, 2007

My mind is blown

This is probably evidence of how little it takes to surprise me. Today I found out that the later Russell was committed to the view that one can quantify over truth-functional connectives. So, one could infer from p&q to (\exists C)pCq, where C ranges over connectives of the appropriate type. This probably shouldn't be surprising, since he thought you could quantify over functions of the various types (this is the ramified-theory era) and he viewed the connectives as propositional functions. Since the connectives are just some propositional functions among others, they will be included in the various levels of the order (type? I'm getting mixed up) hierarchy. Russell's type theory is also committed to the connectives being stratified, since you can't have a lower-order connective connecting higher-order propositions.
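
To make the parallel with quantification over propositional functions vivid, here is a schematic rendering of the two inferences (my notation, not Russell's): just as one can pass from a predication to an existential generalization over propositional functions, treating & as a two-place propositional function licenses generalizing over it.

\[
\frac{Fa}{(\exists \varphi)\,\varphi a}
\qquad\qquad
\frac{p \mathbin{\&} q}{(\exists C)\, p\,C\,q}
\]

On this picture, the stratification point is just the usual typing restriction applied to connective variables: the C in the conclusion must be of the same order/type as the & it generalizes.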

Among other things that should have probably come to my attention before: Ruth Millikan has a book called "White Queen Psychology and Other Essays for Alice". I must say, that is a fantastic title. This is something that should've been apparent earlier, since last semester I read an essay by McDowell in which he quoted large parts of a Millikan essay called "White Queen Psychology". I remember seeing that title and being distinctly puzzled, cause all that came to mind was the X-Men villain, which really made no sense.

Tuesday, January 23, 2007

Carroll on the relation between conceivability and possibility

In the Wittgenstein class, Tom Ricketts said something that reminded me that I need to write up a few more posts on Through the Looking-Glass. I'll quote the important bit:
"'You needn't say "exactly,"' the Queen remarked. 'I can believe it without that. Now I'll give you something to believe. I'm just one hundred and one, five months and a day.'
'I can't believe that!' said Alice.
'Can't you?' the Queen said in a pitying tone. 'Try again: draw a long breath, and shut your eyes.'
Alice laughed. 'There's no use trying,' she said, 'one can't believe impossible things.'
'I daresay you haven't had much practice,' said the Queen. 'When I was your age, I always did it for half-an-hour a day. Why, sometimes I've believed as many as six impossible things before breakfast. There goes the shawl again!'"

If we grant that believability is sufficient for conceivability (which strikes me as prima facie plausible), then Carroll endorses the idea that conceivability does not entail possibility. On his picture, we can still conceive of and believe in impossible worlds (although Carroll doesn't go in for worlds talk). Don't think we can? The White Queen replies: just try harder; maybe shut your eyes and breathe deeply. It seems to me (and this may get overly geeky) that Carroll would further reply to someone who says that such-and-such a scenario is inconceivable in much the same way as Inigo Montoya: "You keep using that word. I do not think it means what you think it means."

Monday, January 22, 2007

Explanation as orgasm

I just saw the greatest article title. It is in vol. 8 no. 1 of Minds and Machines. The article is "Explanation as orgasm" by Alison Gopnik, a great developmental psychologist at Berkeley. To copy her phrasing, her idea is that explanations are to understanding as orgasms are to reproduction. On her idea, phenomenologically, we make up theories and use our understanding in order to explain, and we try to reproduce in order to have orgasms. Evolutionarily, the order of explanation is reversed; on each side of the analogy, we do the former in order to achieve the latter.

I'm not going to go into the merits of the article cause I'm not going to read it (procrastination: woo!) but the title was too good to pass up a brief look. That and writing posts without content is too easy as procrastination to pass up once in a while (or when paper deadlines are looming).

Sunday, January 21, 2007

Russell on Locke

From Russell's 1905 "The Nature of Truth":
"A muddled form of this definition is to be found in Locke, whose claim to be regarded as a philosopher seems to be derived from his having put together all the mistakes that unphilosophical people are prone to commit."
Ouch. Some might call this uncharitable history of philosophy.

Saturday, January 20, 2007

Rational argumentation

In the opening page of The Moral Problem, Michael Smith says that rational argumentation aims at truth. This strikes me as rather strong and different from what I would have expected. It seems like rational argumentation aims at something more like consistency. When someone argues with you, they try to ferret out which of your commitments are incompatible. In response, you change or drop various views you had. Or it is a matter of teasing out the consequences of various positions. By coming at a view from different angles, you work out what one is committing oneself to, and why, by adopting that view (or has committed oneself to, if the view has been adopted). But this isn't aiming at truth yet, at least not without some supplementation. It seems like a strong view because rational argumentation seems like it should proceed roughly in a manner that could be presented in a formal system (although it will be conducted in natural languages, with some extra ad hominem stuff thrown in in most cases). That is to say, it should be possible to make explicit the important assumptions and the argumentative structure, at least in retrospect, and to make clear what sorts of inferences were made when and why. But inference (on one way of looking at it) only preserves truth. (Viewing inference this way is what Dummett regards as a retrograde move on Frege's part.) It doesn't generate truths from falsehoods with any regularity. In order to aim at truth, rational argumentation would need some way of identifying which premises were the false ones, and it would need a way of coming up with truths to replace the falsehoods. The former seems somewhat reasonable in some cases. By working over a view with someone with different information, one might be able to surmise that some assumption or other was likely (in some cases, definitely) false (although Duhemian holism is rightly looming, to borrow a phrase from Smith). The latter seems somewhat outside the domain of argumentation alone. Rational inquiry, broadly interpreted to include scientific experimentation, perception, exploration, etc., could plausibly be held to do this, but argumentation by itself? I'm not quite enough of a rationalist to get on board with that.

Wednesday, January 17, 2007

Not too far off the mark, or Tonkless

While I joked in the last post that the target audience of Admissibility of Logical Inference Rules was not philosophers because there wasn't enough prose, I found a more serious reason to think this. Apart from the bit in the introduction about being for computer science students, there is no entry for 'tonk' in the index, and neither Prior's paper nor Belnap's response is in the bibliography. Tragic!

Tuesday, January 16, 2007

A note on my inference project

Earlier today I was at the library trying to find Structural Proof Theory, a book that Ole recommended, and I came across a book that looks like it will be useful for my side project on inference. It is Admissibility of Logical Inference Rules by Rybakov. It is not aimed at philosophers; not enough prose. (That was a joke.) It looks like it is a very technical discussion of the logic and math behind admissible inference rules. It has long sections on various flavors of propositional and modal logics, as well as some stuff on first-order logics that I haven't gotten to look at yet. I think it will flesh out some of the logical background on inference rules that I've been looking for.

Admissible and derivable rules were very briefly mentioned in my proof theory class last semester, with no details. Last week in modal logic, Belnap pointed out the connection between derivable rules and conditionals. With a derivable rule, you can always prove the conditional whose antecedent is the conjunction of the premises and whose consequent is the conclusion. (This is for natural deduction style rules and propositional logic.) For admissible but non-derivable rules this is not the case. This is clear from the definitions of admissible and derivable rules. For formulas A_i, the inference rule A_1,...,A_n/B is admissible iff every instance of the rule obtained by substituting formulas for the atomic propositions in the A_i and B is such that if the substituted A_i are provable, then the substituted B is provable too. A rule A_1,...,A_n/B is derivable if A_1,...,A_n |- B. Applying the deduction theorem gets the conditional in question.
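
To put the two definitions side by side, here are the standard formulations (with \sigma ranging over substitutions of formulas for atomic propositions, and \vdash being provability in the system at hand):

\[
A_1,\ldots,A_n \,/\, B \text{ is derivable iff } A_1,\ldots,A_n \vdash B
\]
\[
A_1,\ldots,A_n \,/\, B \text{ is admissible iff for every substitution } \sigma,\ \vdash \sigma A_1,\ \ldots,\ \vdash \sigma A_n \text{ together imply } \vdash \sigma B
\]

For a derivable rule the deduction theorem then yields \vdash (A_1 \wedge \cdots \wedge A_n) \to B, which is Belnap's conditional. Every derivable rule is admissible but not conversely; a standard example is Harrop's rule in intuitionistic propositional logic, \neg A \to (B \vee C) \,/\, (\neg A \to B) \vee (\neg A \to C), which is admissible but not derivable.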

To praise the book with faint damning, it has three problems. There are a fair few typos (based on my brief skimming; I've found a few so I'm guessing there are more), the index is awful, and it is too expensive to buy. That being said, it looks pretty useful.

Monday, January 15, 2007

On the off chance that someone would be interested...

The philosophy departments at CMU and Pitt are sponsoring a graduate student conference and the deadline for papers has been extended to 2/1. Here is the blurb from the email:
"The conference will be held on March 24th 2007 in Pittsburgh, PA. Our keynote speaker will be Michael Strevens, and faculty lectures will also be given by Robert Brandom and Clark Glymour.

Submissions pertaining to the conference theme of "Understanding and Explanation" are heartily encouraged. Please see the attached flyer for details. Or, visit our website for more information. Limited travel funds will be available to qualifying graduate students."

Sunday, January 14, 2007

Meaning is use

Although the idea that meaning is use is often attributed to him, Wittgenstein never said that meaning is use. The closest he seems to come to saying it (based on my cursory knowledge of Wittgenstein, reading other things, and poking around the internet) is PI 43: "For a large class of cases — though not for all — in which we employ the word ‘meaning’ it can be defined thus: the meaning of a word is its use in the language." Wittgenstein here denies that the meaning of every word can be given in terms of its use. He isn't quite nice enough to say which class of words has meanings that cannot be defined in terms of use. Most philosophers who apply the 'meaning is use' slogan to the whole of semantics seem to think that all words can be explained in that way. Is this a case where people were just inspired to grander things by a qualified statement? Or are there reasons to think that Wittgenstein, by his own lights, should have dropped the qualification and said that all meaning is so explainable?

Thursday, January 11, 2007

Wittgenstein on clarity

Since I'm taking a class on the early Wittgenstein, I'll probably have a few posts this semester on issues from the Tractatus. I figured I'd kick things off with a few thoughts on an issue that I didn't think I'd find in the Tractatus. Wittgenstein had some thoughts on clarity. Amazingly, the Routledge edition of the Ogden translation has an index entry for clarity. Proposition 4.1 says "A proposition presents the existence and non-existence of atomic facts." From this, he goes on to comment at 4.112 "The object of philosophy is the logical clarification of thoughts. Philosophy is not a theory but an activity. A philosophical work consists essentially of elucidations. The result of philosophy is not a number of 'philosophical propositions', but to make propositions clear. Philosophy should make clear and delimit sharply the thoughts which otherwise are, as it were, opaque and blurred." This is getting at the train of thought that I'm interested in. It is made explicit in 4.116 "Everything that can be thought at all can be thought clearly. Everything that can be said can be said clearly."

The important difference between this Wittgenstein and Rorty probably comes down to what is meant by 'clearly'. When Wittgenstein says that anything that can be said or thought can be said or thought clearly, who does he think can say or think these things clearly? Is it the person who initially says or thinks them? That is somewhat dubious, since there are certainly things that one could think without quite getting clear on what exactly one is getting at. But maybe he means that someone, some sympathetic interpreter, can put the thoughts clearly. To hop between topics somewhat, this seems to be part of the project of Brandom's Making It Explicit, Articulating Reasons, and Tales of the Mighty Dead. Part of that project is that interpreters make explicit the collateral beliefs and commitments of whomever is being interpreted so that the trains of inference can be fairly assessed and criticized. The basic premise of 4.116 seems to be in the background of Brandom's approach: whatever can be thought can be made explicit and, with the supplementation of collateral commitments, made clear. But that is sort of an aside. I find 4.116 fairly plausible, although that belief comes and goes depending on what I've been working on.

The remarks in 4.112 are also kind of appealing, although I'm not sure whether we should take the elucidations and clarity to be analysis in the sense of conceptual analysis. Now, in what sense does philosophy make other propositions clearer? Is the job to make clear the dialectical structure used? This would restrict the application of philosophy to things that have an argumentative structure. Maybe the job is to draw out the conceptual relations between ideas that are used in various disciplines or texts. That sounds somewhat like traditional philosophical projects. Wittgenstein seems to want to draw a distinction between elucidations and philosophical propositions, but if the job of philosophy is just to produce elucidations, then it would seem that elucidations are philosophical propositions. It seems reasonable to take him to mean that philosophy does not result in traditional philosophical propositions but rather in explanations. This is somewhat attractive, although I think I disagree with it. There seem to be some substantive philosophical propositions out there, e.g. Dummett's anti-realism, which are not merely explanatory, but I imagine that the Tractatus Wittgenstein would say those are nonsense.

A note on Kantian inference

At the start of the Analytic, Kant says that in general logic, reason is the division of the higher faculties of cognition that deals with inference. This does not carry over to transcendental logic, since the transcendental use of reason is not valid according to Kant. I'm not sure how this plays out, but it is worth noting that Kant ties the operation of reason to making inferences. I'd like to know more of what Kant says on inference, but there doesn't seem to be any one place in the Critique (or the Logic or the Prolegomena) where he discusses inference in detail.

Naturally deducing, another look

Prawitz proves what he calls the normal deduction theorem for natural deduction, in minimal, intuitionistic, and classical flavors. Gentzen proved cut elimination (or the Hauptsatz, as he called it) for his sequent calculus, in all three flavors as well. The two theorems do the same kind of work. They guarantee that deductions have a certain nice form and nice properties, including the subformula property. The two theorems end up being equivalent in a sense. (I'm not sure how to prove their equivalence, though.) We do know that if A is deducible from \Gamma in natural deduction, then there is a sequent calculus derivation of it, and conversely. The proof of this provides a way of taking a proof in natural deduction or the sequent calculus and getting a proof in the other. There are two kinds of natural deduction systems that could be used here, though. One is Prawitz's natural deduction, which treats each node of a derivation as a formula. The other is the one we used in the class I took, which treats each node as a sequent, with the open hypotheses made explicit to the left of the arrow. (This isn't a sequent calculus, though, since the sequent calculus has only introduction rules, left and right ones, in place of elimination rules.) The proof I've seen shows how to take a proof in natural deduction in the second sense to a sequent calculus proof, and vice versa. This somewhat foreshadows Hjortland's point.
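
For concreteness, here is how the conditional rules look in the sequent-style natural deduction format and in the sequent calculus (a standard presentation, not tied to Prawitz's or Rybakov's particular notation). In sequent-style natural deduction the open hypotheses sit to the left of the turnstile and there are introduction and elimination rules; the sequent calculus instead trades the elimination rule for a left introduction rule:

\[
\frac{\Gamma, A \vdash B}{\Gamma \vdash A \to B}\;(\to\mathrm{I})
\qquad
\frac{\Gamma \vdash A \to B \quad \Delta \vdash A}{\Gamma, \Delta \vdash B}\;(\to\mathrm{E})
\]

\[
\frac{\Gamma, A \vdash B}{\Gamma \vdash A \to B}\;(\to\mathrm{R})
\qquad
\frac{\Gamma \vdash A \quad \Delta, B \vdash C}{\Gamma, \Delta, A \to B \vdash C}\;(\to\mathrm{L})
\]

In Prawitz's format the introduction rule is instead written with a formula at each node and a bracketed, discharged assumption [A] standing above the derivation of B.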

The sequent calculus has structural rules (weakening, for example). Natural deduction (in Prawitz's sense) does not. It seems like we lose explicit information by doing proofs in natural deduction rather than in the sequent calculus. As Hjortland points out, however, if we look carefully, the structural rules are still there. Prawitz defines a discharge function, which keeps track of which hypotheses are discharged where. The structural rules are realized in natural deduction by which sorts of discharge functions are allowed. This is somewhat to be expected, since we know there is a correspondence between natural deduction and sequent calculus derivations. To take this a step further: if we take Hjortland's point seriously, there should be a proof that constructively transforms ND proofs (in Prawitz's sense) into sequent calculus proofs, and vice versa, in a way that keeps the use of structural rules explicit.
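
The way the correspondence is usually glossed (a standard observation, not a quotation from Hjortland): weakening shows up in natural deduction as vacuous discharge and contraction as multiple discharge.

\[
\frac{\Gamma \vdash B}{\Gamma, A \vdash B}\;(\mathrm{weakening})
\qquad
\frac{\Gamma, A, A \vdash B}{\Gamma, A \vdash B}\;(\mathrm{contraction})
\]

That is, an application of the conditional introduction rule that discharges an assumption which was never actually used corresponds to weakening, and one that discharges several occurrences of the same assumption corresponds to contraction. Restricting the allowed discharge functions, for instance by forbidding vacuous or multiple discharge, is then how one gets relevant or contraction-free variants.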

Tuesday, January 09, 2007

Dummett on modality

[A word of warning, this is not the clearest post.] In a paper called "Could There Be Unicorns?" Dummett goes into Kripke's argument in Naming and Necessity that there could not be unicorns. Dummett presents three ways of understanding Kripke's argument, and on two of those readings the argument doesn't work. I didn't understand that part of the paper (nor do I remember it well), so I won't talk about that. In the conclusion Dummett doesn't sound terribly convinced of it either. What is more interesting is some of the background discussion. First, Dummett talks about the revival of modality among philosophers. As the story goes, modal concepts like possible worlds fell into ill repute among philosophers in the first half of the 20th century, due in large part to arguments by Quine. Modal notions were "spooky" intensional notions and so modal logic was a "spooky" intensional logic. The only good notion was an extensional one, so modal notions were right out. Modal logic and possible worlds were revived, and made palatable, in the 60s and 70s by Kripke's giving a completeness proof and a semantics for modal logic, in addition to his Naming and Necessity lectures. Dummett thinks that this story does not get things right. Whatever the reasons for modal logic becoming popular among philosophers again, he thinks that Kripke's work is not relevant. One of the big advances was that the Leibnizian idea of possible worlds could be extended to weaker logics by relativizing the accessibility relation. Also, Kripke's big proof was for K. However, K is not the logic that most philosophers use. They use an unrestricted accessibility relation, which is what is used in S5. This leads Dummett to say that philosophers don't put any structure into their accessibility relation (which is sort of odd since it is fairly structured, in an equivalence relation kind of way). I think what he means is made clearer when he says that in S5, there is nothing special about the actual world. In general in K or S4, the actual world is special in virtue of what is accessible from it and what it is accessible to. At least, that is what I think he means. So, since Kripke's work didn't do anything to make S5 any clearer than it already was, philosophers shouldn't use it as a justification for using possible worlds talk in the sense of S5. This is not to say that S5 is obscure. Dummett seems to think that Leibniz and others made unrestricted modality and possible worlds somewhat respectable, although not completely clear.
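
For reference, here are the usual frame conditions lying behind the remark about structure (these are the standard correspondences, not something argued for in Dummett's paper):

\[
\begin{array}{ll}
\mathbf{K}: & \text{no condition on } R\\
\mathbf{T}: & R \text{ reflexive}\\
\mathbf{S4}: & R \text{ reflexive and transitive}\\
\mathbf{S5}: & R \text{ an equivalence relation (reflexive, transitive, and symmetric)}
\end{array}
\]

Relativizing accessibility is what lets the same possible-worlds picture cover all of these systems; S5's relation is the maximally structured, equivalence-relation case, which fits the point that within an S5 model the actual world is not distinguished by what it can or cannot access.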

The other bit of background that was neat was Dummett's attempt to make sense of the relative accessibility relation. He says that no one has made philosophical sense of relative accessibility. Nuel Belnap seems to agree with him. Mathematically it is clear and elegant, but philosophically it is a bit flaccid. What is Dummett's proposal? While he doesn't explain exactly what the relative accessibility relation could be, he tries to give an example that would motivate having an accessibility relation that is not an equivalence relation. He does this by casting things in terms of states of affairs. Dummett's example motivates rejecting symmetry. A state of affairs T is accessible only if a state of affairs S, which T presupposes or requires, obtains in the world under consideration. Suppose we are in a world w that has T and S and is related to a world u that has S, which in turn is related to a world v that has neither. The relation is transitive, so w is related to v. However, v is not related to w, since v does not have S. It could still be related to u, but it is necessary for a world to have S in order to be related to a world with T. This seemed somewhat convincing, although he admits the difficulties in extending this to something that would motivate abandoning transitivity.
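
As a toy frame capturing just the shape of the example (my labels, and nothing more than a restatement of the paragraph above):

\[
W = \{w, u, v\}, \qquad R = \{(w,u),\ (u,v),\ (w,v)\}
\]

Here R is transitive but not symmetric: w is related to v, but v is not related back to w, which is all the example needs to motivate dropping symmetry while keeping transitivity.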

Finally, Dummett expresses some worries about possible worlds talk in general. To quote, "I believe that the use by philosophers of possible-worlds semantics has done, on balance, more harm than good." As for his arguments in the article, Sir Michael says they "may possibly not be watertight".

Monday, January 08, 2007

New term, new classes

Since we got shafted on winter break (barely two weeks), we've already started classes. The lineup for the coming semester is pretty nice: ethics core, modal logic, Wittgenstein, and philosophy of language. Ethics is mainly about reasons for action, focusing on Smith's Moral Problem and Korsgaard's Sources of Normativity. Modal logic is a modal logic class (shocking!) oriented to philosophers. I'm curious to see how that goes since I've done some modal logic in the Amsterdam school. Wittgenstein will focus on the early Wittgenstein and the relevant Russell. No PI, but lots of Tractatus. Philosophy of language is a survey of the debate on what semantics is (or what what-is-said is). This class looks similar to the one I took last winter with Ken Taylor, which, incidentally, was the original motivation for starting this blog. I'm also going to attempt to sit in on a syntax class in the linguistics department. It is on Chomskyan grammar. Since I've only had a class in HPSG, I'm curious how the other half (or other 90%, as the case may be) thinks.

Sunday, January 07, 2007

Carroll on names

I never wrote anything about Through the Looking-Glass, so I'll try to write up a few short, light posts on it. It is a delightful book.

Lewis Carroll plays around with names in Through the Looking-Glass. Carroll was a logician, but he was not alive for the controversy surrounding the meaning of names that erupted in the 20th century. Instead of focusing on the narrowly logical use of names (I'm not sure whether he was familiar with Frege's logic or whether the syllogistic logic he did know placed the same importance on names), he talks about them in a different setting. For example, when Alice meets the looking-glass insects, she says she doesn't like bugs but knows some of their names. The gnat asks her if they answer to their names, and she says she didn't think so. Then the gnat says, "What's the use of their having names if they won't answer to them?" Alice's answer is that it is no use to the bugs; rather, it is useful to the humans. To over-analyze this, the mistake the gnat is making is to think that a thing's name must be known by the thing. Clearly, the point of the name is so that the ones doing the naming can keep track of or classify the named things. While there is a faint echo of the logical use of names here (i.e. constants denoting objects, or, in a different way of thinking, rigid designators), it seems more like names are meant in a more pragmatic or action-theoretic way. By this I mean: it isn't just that the name designates some individual or class, it is what the name enables one to do. Naming something "Gnat" doesn't just designate it, it aids us in keeping track of it, asking others about it, etc. I think Follesdal wrote something along these lines in his dissertation, Referential Opacity and Modal Logic (which, in addition to the stuff on names, has a thorough analysis of Quine's problems with modality).

Later on, when Alice meets Humpty Dumpty, he asks her what her name is. When she tells him, he asks her what it means, and she responds by asking whether a name must mean something (she asks this doubtfully). Humpty Dumpty thinks that it must. According to him, his name means the shape he is. Alice's name doesn't specify any shape at all. I don't think Humpty Dumpty attributes anything more to his name than his shape. I took Alice to think that her name just meant (maybe referred to?) her, and that it didn't have any more meaning than that. Humpty Dumpty, interestingly, isn't quite a descriptivist. While he does want to associate some descriptive information with his name, it isn't a description that uniquely picks him out. It is just a description he satisfies, namely his shape. Would he answer to "Round" or "The shape you are"? Probably not. Lots of things are round. To play a game that I hear has been attributed to Lewis, let's imagine what the world would have to be like if Humpty Dumpty were right about the meaning of names. Names must mean something. It seems like it could then become incorrect to call someone by their name if they changed in a drastic way. E.g., if Humpty Dumpty fell and cracked his shell, his shape would change. Since his name means the shape he is, it would seem that the meaning of his name would, in a sense, change too. Supposing his name meant "round" rather than "the shape he is", it seems like it would become wrong to call him by his name. It seems, then, that one must be careful to choose names with flexible enough meanings lest one become unable to use one's own name.

Tuesday, January 02, 2007

On the shoulders of giants or in their shadows?

Continuing a series of practically content-free posts, here's a great quote from Dummett's preface to his Logical Basis of Metaphysics, which finally showed up at Pitt's library.

"We all stand, or should stand, in the shadow of Wittgenstein, in the same way that much earlier generations once stood in the shadow of Kant; and one of my complaints about many contemporary American philosophers is that they appear never to have read Wittgenstein."

[Update: Follow ups here and here.]

Indices are wack...

Here's something I think everyone can support: all philosophical books should be available as searchable .pdfs. One reason is that large philosophical books are difficult to transport. For example, there's a nice logic handbook that is about 800 pages. As much as I might want to, I can't carry it with me wherever I go. Having a .pdf version would solve that. The main reason that all philosophical books should be available in .pdf is that indices in books are, by and large, bad. They range from non-existent to awful, with a very select few being good. For example, the translation of the first Critique that I used last semester had an index that failed, almost every time, to give the page numbers on which the relevant words appear. More pressingly, several anthologies and handbooks I've read did not even have indices. This makes it quite hard to find particular quotes or definitions. If, as is often the case, one wants to write a paper that involves some close reading of a text, it is invaluable to be able to find a particular phrase or quote. Or, say, one wants to find where in Wittgenstein's PI he uses the phrase "meaning as use." But if, as is the case, he doesn't use the phrase, this leads to a lot of lost time. So, the solution to this is to distribute the books in .pdf. The distribution is obviously going to be the hard part. Amazon has a nice program where you can access searchable .pdfs online for books you've bought through Amazon. Maybe academic institutions could set up some sort of online access to books. Or give them away for free. There's little evidence that free electronic copies of books lower sales of paper copies, while there is some evidence that free distribution increases paper sales. In any case, sales of philosophical books are so low that increased notoriety can't do anything but help.

A few more links

I added a few links to the sidebar that are worth following. Nate Charlow is a grad student at Ann Arbor whom I met when I was visiting schools. Social Science++ is the blog of a friend of mine from undergrad. It focuses on economics, computation, and social science issues. Finally, Nothing of Consequence is the blog of Ole Hjortland at St Andrews on logic and various related areas. In addition to the blog, Hjortland has a great master's thesis on logical consequence (on which I hope to write a post soon) on his website.

Monday, January 01, 2007

Expressing yourself

Does the idea of expressive power make sense when talking about natural languages? One will only be tempted to use that vocabulary, it seems, if one is already viewing natural languages in terms of first-order (or higher) logic, that is, viewing natural languages as containing terms and predicates with grammatical sentences as wffs. (Maybe a slightly more nuanced way of putting the view is that the structure of first-order logic faithfully and straightforwardly represents that of natural languages. I don't want to rule out the idea of using first-order logic to represent natural languages, since, e.g., HPSG does it for syntax, although the representation of English by FOL in HPSG looks nothing like English and is pretty well unreadable.) A few authors use this vocabulary: Brandom, Dummett, Sellars. In order to use the notion of expressive power, we have to have a well-defined idea of the wffs of a language and ways of comparing what is expressible in one language and another. What will be at issue here won't be different languages in the sense of English and Japanese, but different languages in the sense of fragments of English, say, one with 'and' and one with 'and' and 'not'. While there is nothing wrong with viewing things like this when one is explicit about the regimentation, it is a little misleading.
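
To make the comparison concrete in the regimented setting, here is the textbook propositional-logic fact behind the 'and' versus 'and'-plus-'not' example (this is just the standard observation, not anything drawn from the authors named above): every formula built from atoms and conjunction alone is true under the valuation that makes every atom true, so no such formula is equivalent to the negation of an atom.

\[
v(p) = \mathrm{T} \text{ for all atoms } p \ \Longrightarrow\ v(\varphi) = \mathrm{T} \text{ for every } \varphi \text{ built from atoms and } \wedge
\]
\[
\text{but } v(\neg p) = \mathrm{F}, \text{ so the } \{\wedge\}\text{-fragment is strictly less expressive than the } \{\wedge, \neg\}\text{-fragment.}
\]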