Saturday, December 30, 2006

The power of pragmatism...

This wonderful strip has almost surely been linked to by other philosophy blogs in the past, but it is worth reading all the same. Additionally, the archive is large enough to eat up several hours, for the procrastinators among us.

This wonderful strip features a guest appearance, of sorts, by one of my favorite idealists.

Friday, December 29, 2006

Dummett on Davidson on translation

In his "What is a theory of meaning? (I)", Dummett attributes an argument to Davidson about the inadequacy of a translation manual for meaning. The argument goes that one could have a complete translation manual from a language one doesn't understand into a language one doesn't understand. One could use this manual to get a perfectly adequate translation without understanding either the source or the target sentences. Since a theory of meaning is supposed to double as a theory of understanding for a language, it follows that translation manuals can't be all there is to meaning. This argument is used by Dummett against Davidson when he claims that (a) Davidson wants a theory of meaning to be modest (in the sense that you must already have the concepts in question to use it) and (b) that a modest theory of meaning does nothing over and above a translation manual. Premiss (a) is true. Most of the relevant argumentation goes to supporting (b), since convincing us of that will get Dummett to the conclusion that a theory of meaning should be full-blooded, as this is the only other option.

I don't know where this argument comes from in Davidson's work. I'd like to know; Dummett doesn't give a citation. It sounds like something that could be there. It seems like the thrust of the argument can be put another way. Translation manuals are solely syntactic (in the logician's sense). They map strings to strings (or, if we have a fancy manual, phoneme sequences to phoneme sequences, or syntactic structures to syntactic structures). At no point in the use of a translation manual does meaning come into consideration. Of course, using a translation manual requires that source sentences be disambiguated and parsed properly, and this job might require recourse to meanings, but apart from this possible presupposition there is no mention of meanings. So, Davidson's argument comes down to saying that syntax by itself can't take care of meaning cause syntax can't explain understanding. I fear there is some misinterpretation going on, because I've just made Davidson sound like Searle in the Chinese Room. But this might not be too far off base, since Davidson's writings on truth emphasize the indispensability of the semantic in interpretation and in our theories of the world. See his "Material Mind" and "Mental Events" for examples of this.

Thursday, December 28, 2006

Naturally deducing an addendum

In my previous post I said that Prawitz included forall-intro among his deduction rules rather than his inference rules, but I couldn't remember why. I looked at his book again and it makes sense now. Forall-intro is a deduction rule for the following reason. While it has the form of an inference from A to \forall x A, there is something left implicit in the rule: the premiss A must be obtained from a deduction. That is, one cannot simply assume A and then infer \forall x A; A can't be an undischarged hypothesis with the generalized variable free in it. (This happened to be my favorite mistake to make in doing natural deduction proofs.) If one could do this, then one could obtain A -> \forall x A for arbitrary A, but this doesn't hold in general. E.g., you'd get: if x is 0, then for all x, x is 0. I'm not sure why the requirement that the premiss be the conclusion of a deduction wasn't built into the natural deduction formulation of forall-intro, but it is explicit when the rule is formulated as a deduction rule.
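
To make the failure concrete, here is the bad derivation written out. This is my reconstruction of the standard counterexample, not Prawitz's own presentation.

```latex
% Start from the assumption x = 0, apply forall-intro to it while it
% is still undischarged, then discharge it with ->-intro:
%
%   [x = 0]                      assumption
%   forall x (x = 0)             forall-intro (illegitimate: x is free
%                                in an undischarged assumption)
%   x = 0 -> forall x (x = 0)    ->-intro, discharging [x = 0]
%
% The conclusion is false in any model whose domain has more than one
% element: interpret the free x as 0, so the antecedent holds while
% the consequent fails.
\[
x = 0 \;\to\; \forall x\,(x = 0)
\]
```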

Sunday, December 24, 2006

Naturally deducing

I worked through most of Prawitz's Natural Deduction. I was aided in this by two things: (1) the book is short and (2) a lot of the material is very similar to stuff I worked through in proof theory last term. I was surprised at how straightforward the proof of the normalization theorem was, given that Gentzen devised the sequent calculus in order to make meta-proofs like that simpler. I'm not complaining though; I thoroughly like natural deduction systems. He didn't talk about the inference rules as much as I thought he would, or rather, he didn't talk about them in the way I was hoping he would, although he explained the technical ideas behind them. He's trying to show how the natural deduction inference rules can constitute the meaning of the connectives (and quantification, modality, and abstraction?). One thing that was neat was the separation of some of the rules into inference rules and deduction rules. The rules for &, v-intro, exists-intro, forall-elim, ->-elim, and ex falso are proper inference rules. The rules v-elim, exists-elim, ->-intro, and forall-intro (I'm pretty sure forall-intro was in there, although it escapes me why) are deduction rules. This is because they all require additional deductions in order to be used. For example, the v-elim rule requires deductions from assumptions of each disjunct: eliminating v in AvB requires deriving C from the assumption A and deriving C from the assumption B, after which you can infer C (the rule is sketched below). This seemed like a good distinction to make since it is a natural way of splitting the rules. I imagine that this distinction would get a little more action in the more general proof-theoretic semantics of, say, Brandom, since there would just be more rules.
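
For reference, here's the v-elim rule in schematic form, as I described it above, in the usual natural deduction notation:

```latex
% v-elim: from a proof of A v B, a deduction of C from the assumption
% A, and a deduction of C from the assumption B, infer C, discharging
% the bracketed assumptions. The two side deductions are what make it
% a deduction rule rather than a proper inference rule.
\[
\frac{\;A \lor B
      \qquad
      \begin{matrix}[A]\\ \vdots\\ C\end{matrix}
      \qquad
      \begin{matrix}[B]\\ \vdots\\ C\end{matrix}\;}
     {C}
\]
```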

Another thing that I found interesting was the final chapters on the proof theories of second-order logic and modal logics. The main thing I'll note is how the intro and elim rules for the modalities mirror the rules for the quantifiers. I'd heard this before in my modal logic classes, and I'd seen it in the translation from modal logic to FOL, but here the structure of the intro and elim rules for the quantifiers really is the same (with a small caveat on the necessity-intro rule) as that of the respective modalities. I had wondered how this would play out in the natural deduction proof theory of modal logic, since I had only seen axiom systems for the various modal logics.
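
From memory, the parallel for S4 looks roughly like this; the caveat on necessity-intro is that the open assumptions must all be modal, which plays the role the eigenvariable condition plays for forall-intro. This is my rough reconstruction, not Prawitz's exact formulation.

```latex
% The elimination rules line up directly:
\[
\frac{\forall x\,A(x)}{A(t)}
\qquad\text{vs.}\qquad
\frac{\Box A}{A}
\]
% The introduction rules line up too, each with a side condition:
\[
\frac{A(a)}{\forall x\,A(x)}
\;\;\text{($a$ not free in any open assumption)}
\qquad
\frac{A}{\Box A}
\;\;\text{(every open assumption of the form $\Box B$)}
\]
```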

Saturday, December 23, 2006

Killing time

This post is going to be a few thoughts about Killing Time without much argumentative point. I read Feyerabend's autobiography Killing Time (KT) at the start of break as a belated follow-up to his Against Method (AM). KT confirmed my initial feeling that AM had a Wittgensteinian flavor: Feyerabend was apparently into Wittgenstein for a while, and he wrote up an interpretation of the Philosophical Investigations. I don't have enough background in Galileo and the debate between Feyerabend and Lakatos (or philosophy of science generally) to evaluate the arguments and interpretations of AM in detail. AM was primarily a case study of Galileo's experiments and promulgation of his physical theories, which formed the basis of an argument that there is no such thing as the scientific method. I'm not going to go into details on that cause (1) it took up most of the book and (2) I can't remember it that well. However convinced I am by Feyerabend's arguments, I think his approach is great, and KT cemented that feeling. His approach seemed to be to look at a bunch of examples, adopt unorthodox positions (this is later in life; early on he was very into Popper), and be iconoclastic and aggressive in argumentation. These points may seem disparate, but there is a common thread. At least, the thread I see is part of his so-called epistemological anarchism, that is, adopting a variety of views from which to approach a problem in the hopes that previously unnoticed or unnoticeable avenues of inquiry will appear. There is something in this that I see in a couple of other people that I am also into: Wittgenstein (hopefully obvious) and Feynman (maybe a future post on philosophy of science and Feynman...). Adopting this approach (in philosophy at least) results in not being a system builder. But it also seems to result in coming up with provocative explanations and arguments for why other people are wrong. This is based on an induction from two instances.

KT was an entertaining book. The narrative gets a bit messy in places. One of the most surprising parts of the book was Feyerabend talking about his voice lessons. He was apparently a very good singer, and he was also a big fan of opera; he talks about the show that he saw in great detail. Feyerabend was a funny guy.

Tuesday, December 19, 2006

Reading list

Following the lead of several people I know, I figure I'll post a reading list for this break. Since the break is already well underway, I don't think I'll finish all of it.
Natural Deduction by Prawitz
Seas of Language by Dummett
Through the Looking Glass by Carroll
Artificial Intelligence by Haugeland
[Update: Ole Hjortland's master's thesis]
I might try to read some stuff by Gentzen and post more to this blog as well. I'm also hoping to finish season 1 of The Sopranos. There's also a book on optimality theory that it would be slick to get through, but that's not terribly likely.

Is there something connecting all these books? No. The Prawitz and the Dummett are both offshoots of this interest in inference that I've developed from classes last term. The Haugeland book is there cause I read some stuff by Haugeland for Brandom's seminar and thought highly of it. I'm curious what he says about AI. The Carroll is there cause I like it and want to reread it. It doesn't have anything to do with the other things on the list. It is a fantastic book and I will probably write up something about Humpty Dumpty in the near future. It is also short enough that I will be able to check it off. The Gentzen articles are there for the same reason as the Prawitz and Dummett. I've read some of them, but I didn't spend the sort of time on them that they deserve. [Update: Hjortland's thesis comes on the recommendation of Aidan and fits into the same camp as the Dummett.]

Saturday, December 16, 2006

A link

I thought I'd follow a couple of lengthy posts with a shorter one that has a much better signal-to-noise ratio. Over at Brain Hammer, a blog I stumbled upon via Philosophy of Brains, there is a link to a review by Dennett of Brandom's Making It Explicit. I skimmed it and it looks quite good. Dennett agrees with Brandom on a surprising amount. One of the things Dennett criticizes is Brandom's "flight from naturalism." This was particularly interesting since Brandom just taught a class on philosophical naturalism in which I got a much better sense of his position. There is some good stuff in there about the role of communities in meaning, interpretation, and intentionality in non-linguistic creatures. The link to Dennett's article is here.

Friday, December 15, 2006

A matter of interpretation, or the connection between Humpty Dumpty and the folk

In From Metaphysics to Ethics, Frank Jackson at one point cites one of my favorite passages in literature for an odd purpose. The passage in question is the Humpty Dumpty chapter of Through the Looking Glass. He says that we can mean whatever we want by our words, but if we want to address the concerns of others, we should mean what they mean. From this, he concludes that if we talk about, e.g., goodness, we should identify our subject matter through the folk theory of morality. This is on p. 118, roughly the middle of the page. It strikes me as a little disingenuous, although not as bad as when I originally read it. The point that Davidson made (or that I took him to make) was that Humpty Dumpty was wrong; we can't mean whatever we want. We can mean whatever we want only insofar as we can still be understood by our interpreters. If we fail at that, then we haven't meant much.

The best I can do with what Jackson said is the following. He seems to be reading the "mean whatever we want" as the entitlement to define new jargon for theories. In order to make our theories relevant to the folk, we must talk about what they talk about. What they talk about is determined by their folk theories (at what point are these up and running such that we have a subject matter?), so we should define our jargon to accord with their theories. At least, that's as far as I get in understanding this bit of his book.

This point is somewhat inconsequential to the rest of the book, but it still bothers me. He seems to me to be missing the point of the Humpty Dumpty chapter, at least the point as I view it through the lens of Davidson. The trail of reasoning that leads from that to the need to use folk theories of X is odd: in order to talk about things that the folk care about, we need to make sure what we're talking about satisfies folk theoretic properties. I'm confused about how this is supposed to go. Aboutness and subject matter aren't matters of folk theories. Presumably, the particular folk in question, call him Folksy, will only be concerned about theories that match the folk theoretic properties if Folksy has that particular folk theory. If Folksy is not with the folk majority as far as his theory of, say, espresso goes, then it seems there is no reason he'd be engaged if I talk to him about an espresso concept defined from the folk theory thereof. What is more relevant is Folksy's theory of espresso.

I'll have to go back through the Humpty Dumpty chapter and make sure I'm understanding it correctly. The Davidson article "A Nice Derangement of Epitaphs" left quite an impression on me, even though I'm fairly doubtful that his arguments in it are valid. This will be a good chance to go back over some of my favorite reading material. It also gives me a bit of a chance to feel out what I find so off-putting about some of what Jackson says regarding folk theories.

Thursday, December 14, 2006

A few reflections

I just finished up fall semester. The classes I've taken this term have meshed well together, in a somewhat odd way, so I think I'll talk about that a bit. This post will be a little scattered. It is also more for my benefit than anyone else's.

First, proof theory. My teacher for it, Jeremy Avigad at CMU, was quite good: very clear and motivating. We talked a lot about the philosophical background of this stuff. I feel like it improved my logic chops some. I'm curious to see some more applications of cut elimination. We covered enough of the foundational material that I feel I can engage with some of the literature now. In particular, I want to go back and evaluate some of Dummett's writings (again and for the first time). I think there are some lingering strands that I'll now be able to pick up on. Another project that I think I'll start on the side is research into inference, broadly construed. For reasons I'm not completely clear on, I've become fascinated by rules of inference and inferences, and I've started compiling some information about them. I'm not sure if that will develop into much, but it will probably occupy future posts.

Second, the M&E seminar. This was less exciting. I'm glad for having done the readings since some of them were things I've meant to read for a while. It confirmed my suspicion that Gettier cases just don't excite me and that I like Quine. At the very least, it is instructive to figure out how Quine went wrong.

Third, Kant. We made it through the schematism in the first Critique. I feel like I've gotten a solid background in the material and a bit of a grip on the overall dialectic. I can certainly see why it occupies such a large role in the history of philosophy. This connects up with some of my other interests through the role that judgments play in the arguments of the Critique, up through the deduction. I read through Kant and the Capacity to Judge, and I'm pretty well convinced that the semantic picture is important for understanding the structure of the Critique. Just as an aside, doing well on the first paper I wrote for that class is probably the thing I'm proudest of this term.


Finally, the seminar on philosophical naturalism. We covered a large number of topics in this class. I wrote my paper on supervenience and physicalism; once I get some comments back on it I'll probably throw up a few posts on that subject. This class was extremely satisfying for me since it was different enough from what I'm used to to throw me out of my philosophical comfort zone, but not so different as to require huge amounts of work to follow. That made it extremely philosophically insightful, and I think I got a lot of perspective out of it. Brandom went through a large number of arguments, and I've been tempted to write about them, though many of them are new to me and I feel a bit uneasy writing about them yet. I got some ideas about things to follow up on, namely some philosophy of language/metaphysics stuff about sortals, some more intersections between formal logic and less formal philosophy, and some things about Sellars. There was also a decent amount of outside literature I had to go through for my final paper, although a fair amount didn't make it in. To plug two things that probably don't get read enough: Etchemendy's The Concept of Logical Consequence and Field's Science without Numbers are both fantastic reads with a lot of depth.


What is the common thread between my classes? One is the role that inference plays in them. It is center stage in proof theory, and it made small guest appearances in Brandom's class and in Kant (through judgments). I was also able to connect most of my classes back to my interest in the philosophy of language, except M&E, which mostly served as good background reading. I've also gotten more interested in philosophical logic this term. This is in large part due to Brandom and Avigad, although neither taught about it specifically; I was able to have some long discussions about related issues with both of them though. Also, I think my writing improved.

This is all hopelessly vague. I will try to post some about these topics over break. In short, Pitt is grand, I need to read me some Dummett and Prawitz, and semesters last too long.

I declare victory over fall semester

I turned in my last required paper for this semester today at 4:30. I win. It was for the philosophical naturalism class with Brandom. I'm feeling pretty good about it. After I decompress a bit, I will probably put up some reflections on the term and whatnot.

Tuesday, December 12, 2006

Another silly linguistic observation, plus a non-sequitur

Another conversational quirk I've noticed among philosophers is one that seems to be more prevalent among the ones that have done a fair amount of math. When confronted with a formulation of something that they don't understand, they say, "What does that even mean?" It isn't "what does that mean?"; rather, it has an "even" stuck in there. I'm not sure why, but it sounds stronger, even though it doesn't seem to ask for anything stronger than the "even"-less question. What does "even mean" even mean?

To change the topic, I was talking to some guys in the Pitt program earlier about great philosophical dialogues to be written. One guy suggested a dialogue between early and later Putnam in which the early Putnam wins. Another one might be a conversation between early and later Wittgenstein. What would they say? Maybe the early one would just recite the main propositions of the Tractatus while the later Wittgenstein would explain why they are nonsense.

Sunday, December 10, 2006

And just as hard to use (II)

In the previous post on combinators and lambdas, I had said that the lambda calculus is not first-order cause of variable binding. This was hopelessly vague (nod to Richard Zach for pointing this out), so I'm going to clarify. First-order logic has variable binding in the form of quantifiers. The lambda calculus has variable binding by lambdas. What's the difference? The lambdas occur in terms, so in the lambda calculus there is variable binding at the term level. The quantifiers of, e.g., FOL occur only in formulas, not terms; they bind at the formula level. The reason that FOL+combinators can't fully simulate lambda abstraction is that it can't have binding at the term level. There are extensions of FOL that allow this, but they were not what I was talking about. I hope that clears up what I was trying to say: a joke that required too much set up.
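
The contrast is easiest to see in the grammars themselves. Here's a sketch (standard presentations, nothing exotic):

```latex
% FOL: terms have no binders; the only binder appears in the formula
% clauses. The lambda calculus puts its binder in the term clauses.
\begin{align*}
\text{FOL terms:}    \quad & t ::= x \mid c \mid f(t_1,\dots,t_n)
  && \text{(no binding)}\\
\text{FOL formulas:} \quad & \varphi ::= P(t_1,\dots,t_n) \mid
  \neg\varphi \mid \varphi \to \psi \mid \forall x\,\varphi
  && \text{(binding at the formula level)}\\
\text{lambda terms:} \quad & t ::= x \mid (t\,t) \mid \lambda x.\,t
  && \text{(binding at the term level)}
\end{align*}
```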

Saturday, December 09, 2006

Can anyone explain this?

My roommate sent me a link to a video on YouTube that features a guy reading a bunch of Kit Fine's books. Can anyone explain it? I'm at a complete loss.

There are no singular terms in English

Here's a post based on something that a teacher of mine once said; I'll try to reconstruct the dialectic he proposed. Logicians distinguish predicates and terms. One kind of term is the singular term, which, in Quine's words, purports to denote a single object. Why are there none in English? For one, English sentences are composed of words, not predicates and terms/singular terms. Further, there are no natural linguistic kinds that could be candidates for singular terms. Noun is out, since, while it would cover "George", it would not cover "the dog". Noun phrase (NP) (or determiner phrase, depending on your favorite syntactic theory) will cover both of those, but it will also admit "the dogs", which is plural, although possibly a singular term if groups of things are singular. We also have to deal with mass nouns, such as "water," which are not singular terms (at least, I think not). Determiners like "several" and "some" will also need to be included, since those denote single things (if groups are singular). Determiner phrases (DP) are not noun phrases in HPSG though. (Forgive me; that is the syntactic theory I'm most comfortable with.) Similarly, complementizer phrases (CP) aren't noun phrases either, so phrases such as "that my laptop is on" will fall outside the scope of "singular term." Well, suppose we take all three of those, NP, DP, and CP, as our singular terms. Modulo the worry about plurals flagged above, we still haven't isolated anything that looks like a natural linguistic kind. Suppose one said that whatever acts as a subject for a verb phrase (VP) is a singular term. This just seems false. Besides there being words that at least prima facie denote more than one thing, there is another wrinkle: things like the dummy "it" in "it is snowing" or the existential "there" in "there is frost in my room." Both occupy putative subject positions, but neither is referential in any sense. I haven't mentioned pronouns (although HPSG treats the dummy "it" and "there" as pronouns), but similar considerations should apply. There are, then, no necessary or sufficient conditions for singular termhood in English. This is not to say that a regimented fragment couldn't have singular terms, nor is it to say that the semantic representation of a sentence couldn't have singular terms. Just not English.

I think there may have been further considerations about the syntactic properties of various phrase kinds offered as well, but those completely escape my memory. A similar argument could be run about how there are no predicates in English. What do you think? I'm not sure how convinced I am by it or how convinced I should be. On one level, I think it is right. English (mutatis mutandis for other natural languages) is not constructed out of terms. The syntax of English is not the syntax of FOL in anything resembling a straightforward sense.

Monday, December 04, 2006

Some call it a "blog roll"

I've updated the list of other blogs I read. This is in part spurred on by the sudden rush of comments (like 3 in the last week). But really, what better time than the end of the term to do things like this?

Among the links are a couple of non-blog websites. Those are worth checking out too, in particular Philosophy Talk, since it is, in my estimation, a very worthwhile project. I'm not just saying that cause I used to work for them. It is a very good project all around. And now it is on Sundays.

Sunday, December 03, 2006

And just as hard to use...

Here's a class-related post that comes out of an explanation I got recently. It is also a bit rough, so be gentle. You can simulate lambda abstraction in combinatory logic. You want to do this to get some of the flexibility of lambda abstraction without leaving a first-order (with types) framework. The lambda calculus is not first-order because terms (lambdas) can bind variables, which you can't do in FOL. How are lambdas different from quantifiers? One's a term and the other isn't, maybe? I'm not completely clear on that. But the combinators are first-order. The simulated abstraction doesn't work in all cases; it can't with the resources of just combinatory logic. One case where it fails is the substitution law (\lambda x.t)[s/z] = \lambda x.(t[s/z]) (from Jeremy Avigad). Why might one want to simulate the abstraction at all? There is a result that shows there is an isomorphism between proofs and programs, and so between propositions and types (and categories, apparently). This means that one can think of a proof as a program, which is definable via the lambda calculus. This in turn means that one can think of proofs as terms in the lambda calculus. What sort of proof though? The terms are natural deduction proofs, with free variables viewed as undischarged hypotheses; abstraction discharges a hypothesis (->-intro), and application is ->-elim. So, one gets that the lambda calculus is the type-theoretic analog of natural deduction. What are combinators then? Combinators are the type-theoretic analog of axiomatic proofs. The route to seeing this is by looking at the types of the combinators: K is p->(q->p) and S is (p->q->r)->((p->q)->(p->r)). These types are structurally the same as the first two axioms of propositional logic (as usually presented; they'll be in there somewhere), and application corresponds to modus ponens (see the sketch below). The punchline: combinators are the type-theoretic analog of axiomatic proofs, and they are just as hard to use. Now, the question is: what is the type-theoretic analog of the sequent calculus?
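
To see the types-as-axioms point concretely, here is a small sketch in Haskell (my example, not from the class): the types of the S and K combinators are exactly the first two axioms of the usual Hilbert-style system, and application plays the role of modus ponens.

```haskell
-- k and s are the K and S combinators; their types are the first two
-- axioms of a standard Hilbert system for the ->-fragment.
k :: p -> (q -> p)
k x _ = x

s :: (p -> (q -> r)) -> ((p -> q) -> (p -> r))
s f g x = f x (g x)

-- Simulated abstraction: every closed lambda term is definable from
-- s and k alone. E.g. the identity \x -> x is s k k, just as a proof
-- of p -> p is assembled from the two axioms by modus ponens
-- (= application).
i :: p -> p
i = s k k

main :: IO ()
main = print (i 42)  -- prints 42
```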

Friday, December 01, 2006

Speakers as measuring devices

In his "Predicate Meets Property," Mark Wilson presents a challenge to the traditional (Frege-Russell) view of predicates expressing or being attached to properties. The challenge is to explain how the extension of the predicate (determined by the property or universal) changes when the usage of the predicate in the community changes. He has a few examples from non-technical usage and a few from the history of science to illustrate this point. E.g., how are the extensions of the predicate "is an electron" different when it was used in Thompson's time and then in the late 20th century. His idea is to make determining the extension of a predicate on a par with measurement in experiments. That is to say that he wants to view speakers as measuring devices of a sort. Measuring devices have a fairly limited range of circumstances in which they will give the correct results and they have an extremely limited range of things they can detect with any accuracy. This means including in descriptions of what speakers are meaning when they make their utterances parameters for, e.g. background conditions. I think I made this sound much less radical than it is. Anyway. I think that the move to viewing speakers as measuring devices fits with Marconi's model of referential competence. The focus in Wilson's article is not on the spooky reference relation; it is on the practice of speakers applying words to things. This description should make clear the connection to Marconi's work. His referential competence is the ability of speakers to apply words to things in accordance with the meanings they associate with the words. In particular, this seems like it could flesh out the further distinction Marconi draws within referential competence of application and recognition.

I've gotten excited about Marconi's book again (last time was September, when we had a brief email correspondence), in part due to a post over at Philosophy of Brains on Lexical Competence.