Saturday, December 30, 2006

The power of pragmatism...

This wonderful strip has almost surely been linked to by other philosophy blogs in the past, but it is worth reading anyway. The archive is also large enough to eat up several hours, for those procrastinators among us.

This wonderful strip features a guest appearance, of sorts, by one of my favorite idealists.

Friday, December 29, 2006

Dummett on Davidson on translation

In his "What is a theory of meaning? (I)", Dummett attributes an argument to Davidson about the inadequacy of a translation manual for meaning. The argument goes that one could have a complete translation manual from a language one doesn't understand into a language one doesn't understand. One could use this manual to get a perfectly adequate translation without understanding either the source or the target sentences. Since a theory of meaning is supposed to double as a theory of understanding for a language, it follows that translation manuals can't be all there is to meaning. This argument is used by Dummett against Davidson when he claims that (a) Davidson wants a theory of meaning to be modest (in the sense that you must already have the concepts in question to use it) and (b) that a modest theory of meaning does nothing over and above a translation manual. Premiss (a) is true. Most of the relevant argumentation goes to supporting (b), since convincing us of that will get Dummett to the conclusion that a theory of meaning should be full-blooded, as this is the only other option.

I don't know where this argument comes from in Davidson's work; I'd like to. Dummett doesn't give a citation, though it sounds like something that could be there. The thrust of the argument can be put another way. Translation manuals are solely syntactic (in the logician's sense). They map strings to strings (or, if we have a fancy manual, phoneme sequences to phoneme sequences, or syntactic structures to syntactic structures). At no point in the use of a translation manual does meaning come into consideration. Of course, using a translation manual requires that source sentences be disambiguated and parsed properly, and that job might require recourse to meanings, but apart from this possible presupposition there is no mention of meanings. So, Davidson's argument comes down to saying that syntax by itself can't take care of meaning because syntax can't explain understanding. I fear there is some misinterpretation going on, because I've just made Davidson sound like Chinese Room Searle. But this might not be too far off base, since Davidson's writings on truth emphasize the indispensability of the semantic in interpretation and in our theories of the world. See his "Material Mind" and "Mental Events" for examples of this.
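
To make the syntactic point vivid, here is a toy rendering in Haskell (the encoding and the example pair are mine, purely for illustration):

    -- A translation manual as a purely syntactic object: a finite map from
    -- strings to strings. Applying it requires no grasp of either language.
    type Manual = [(String, String)]

    translate :: Manual -> String -> Maybe String
    translate manual s = lookup s manual

    -- translate [("Schnee ist weiss", "la neige est blanche")]
    --           "Schnee ist weiss"
    --   == Just "la neige est blanche"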

Thursday, December 28, 2006

Naturally deducing an addendum

In my previous post I said that Prawitz included forall-intro among his deduction rules rather than his inference rules, but I couldn't remember why. I looked at his book again and it makes sense now. Forall-intro is a deduction rule for the following reason. While it has the form of an inference from A to \forall x A, there is something left implicit in the rule. The premiss A must be obtained from a deduction; that is, one cannot simply assume A and then infer \forall x A. More precisely, the variable being generalized can't occur free in any undischarged hypothesis that A depends on. (This happened to be my favorite mistake to make in doing natural deduction proofs.) If one could do this, then one could obtain A->\forall x A for arbitrary A, which doesn't hold in general; e.g., you'd get "if x is 0, then for all x, x is 0." I'm not sure why the requirement that the premiss be the conclusion of a deduction wasn't built into the formulation of the natural deduction rule for forall-intro, but it is explicit when the specific deduction rule formulation is given.
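
To see the failure concretely, here is the bad derivation the restriction blocks, in the notation above:

    1. x = 0                         hypothesis
    2. \forall x (x = 0)             forall-intro on 1 -- illegal: x is free in the open hypothesis
    3. x = 0 -> \forall x (x = 0)    ->-intro, discharging 1

Line 3 is the unwanted instance of A->\forall x A: in a domain with more than one number, picking x to be 0 makes the antecedent true and the consequent false.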

Sunday, December 24, 2006

Naturally deducing

I worked through most of Prawitz's Natural Deduction. I was aided in this by two things: (1) the book is short and (2) a lot of the material is very similar to stuff I worked through in proof theory last term. I was surprised at how straightforward the proof of the normalization theorem was, given that Gentzen devised the sequent calculus in order to make meta-proofs like that simpler. I'm not complaining though; I thoroughly like natural deduction systems. He didn't talk about the inference rules in quite the way I was hoping he would, although he explained the technical ideas behind them. He's trying to show how the natural deduction inference rules can constitute the meaning of the connectives (and quantification, modality, and abstraction?). One thing that was neat was the separation of the rules into inference rules and deduction rules. The rules for &, v-intro, exists-intro, forall-elim, ->-elim, and ex falso are proper inference rules. The rules v-elim, exists-elim, ->-intro, and forall-intro (I'm pretty sure forall-intro was in there, although it escapes me why) are deduction rules. This is because they all require additional deductions in order to be used. For example, the v-elim rule requires deductions from assumptions of each disjunct, so eliminating v in AvB would require deriving C from assumption A and deriving C from assumption B; then you can infer C. This seemed like a good distinction to make, since it is a natural way of splitting the rules. I imagine that this distinction would get a little more action in the more general proof-theoretic semantics of, say, Brandom, since there would just be more rules.
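
Under the propositions-as-types reading (not Prawitz's own presentation, just a convenient gloss), the deduction rules are exactly the ones whose premisses are themselves derivations. A minimal Haskell sketch of v-elim, where the two function arguments play the role of the derivations of C from each disjunct (names mine):

    -- v-elim: given A v B, a derivation of C from assumption A, and a
    -- derivation of C from assumption B, conclude C. Either plays the
    -- role of disjunction; the functions play the role of the deductions.
    orElim :: Either a b -> (a -> c) -> (b -> c) -> c
    orElim (Left x)  f _ = f x
    orElim (Right y) _ g = g y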

Another thing that I found interesting was the final chapters on the proof theories of second-order logic and modal logics. The main thing I'll note is how the intro and elim rules for the modalities mirror the rules for the quantifiers. I'd heard this before in my modal logic classes, and I'd seen it in the translation from modal logic to FOL, and indeed the structure of the intro and elim rules for the quantifiers is the same (with a small caveat on the necessity-intro rule) as that of the respective modalities. I had wondered how that would play out in the natural deduction proof theory of modal logic, since I had only seen the axiom systems for the various modal logics.
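
For concreteness, the parallel, as I recall Prawitz's formulation for S4 (so treat the exact side conditions as from memory):

    forall-intro: from A, infer \forall x A, provided x is not free in any open assumption A depends on.
    nec-intro:    from A, infer []A, provided every open assumption A depends on is modal, i.e. of the form []B.

    forall-elim:  from \forall x A, infer A[t/x].
    nec-elim:     from []A, infer A.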

Saturday, December 23, 2006

Killing time

This post is going to be a few thoughts about Killing Time without much argumentative point. I read Feyerabend's autobiography Killing Time (KT) at the start of break as a belated follow-up to his Against Method (AM). KT confirmed my initial feeling that AM had a Wittgensteinian flavor. Feyerabend was apparently into Wittgenstein for a while; he wrote up an interpretation of the Philosophical Investigations. I don't have enough background in Galileo and the debate between Feyerabend and Lakatos (or philosophy of science generally) to evaluate the arguments and interpretations of AM in detail. It was primarily a case study of Galileo's experiments and promulgation of his physical theories, which formed the basis of an argument that there was no such thing as the scientific method. I'm not going to go into details on that because (1) it took up most of the book and (2) I can't remember it that well. However convinced I am by Feyerabend's arguments, I think his approach is great, and KT cemented that feeling. His approach seemed to be to look at a bunch of examples, adopt unorthodox positions (this is later in life; early on he was very into Popper), and be iconoclastic and aggressive in argumentation. These points may seem disparate, but there is a common thread. At least, the thread I see is part of his so-called epistemological anarchism, that is, adopting a variety of views from which to approach a problem in the hopes that previously unnoticed or unnoticeable avenues of inquiry will appear. There is something in this that I see in a couple of other people I am also into: Wittgenstein (hopefully obvious) and Feynman (maybe a future post on philosophy of science and Feynman...). Adopting this approach (in philosophy at least) results in not being a system builder. But it also seems to result in coming up with provocative explanations and arguments for why other people are wrong. This is based on an induction from two instances.

KT was an entertaining book. The narrative gets a bit messy in places. One of the most surprising parts of the book was Feyerabend talking about his voice lessons. He was apparently a very good singer, and he was also a big fan of opera; he talks about the show that he saw in great detail. Feyerabend was a funny guy.

Tuesday, December 19, 2006

Reading list

Following the lead of several people I know, I figure I'll post a reading list for this break. Since the break is already well underway, I don't think I'll finish all of it.
Natural Deduction by Prawitz
Seas of Language by Dummett
Through the Looking Glass by Carroll
Artificial Intelligence by Haugeland
[Update: Ole Hjortland's master's thesis]
I might try to read some stuff by Gentzen and post more to this blog as well. I'm also hoping to finish season 1 of the Sopranos. There's also a book on optimality theory that it would be slick to get through, but that's not terribly likely.

Is there something connecting all these books? No. The Prawitz and the Dummett are both offshoots of the interest in inference that I've developed from classes last term. The Haugeland book is there because I read some stuff by Haugeland for Brandom's seminar and thought highly of it; I'm curious what he says about AI. The Carroll is there because I like it and want to reread it. It doesn't have anything to do with the other things on the list. It is a fantastic book, and I will probably write up something about Humpty Dumpty in the near future. It is also short enough that I will be able to check it off. The Gentzen articles are there for the same reason as the Prawitz and Dummett: I've read some of them, but I didn't spend the sort of time on them that they deserve. [Update: Hjortland's thesis comes on the recommendation of Aidan and fits into the same camp as the Dummett.]

Saturday, December 16, 2006

A link

I thought I'd follow a couple of lengthy posts with a shorter one that has a much better signal-to-noise ratio. Over at Brain Hammer, a blog I stumbled upon via Philosophy of Brains, there is a link to a review by Dennett of Brandom's Making It Explicit. I skimmed it and it looks quite good. Dennett agrees with Brandom on a surprising amount. One of the things Dennett criticizes is Brandom's "flight from naturalism." This was particularly interesting since Brandom just taught a class on philosophical naturalism in which I got a much better sense of his position. There is some good stuff in there about the role of communities in meaning, interpretation, and intentionality in non-linguistic creatures. The link to Dennett's article is here.

Friday, December 15, 2006

A matter of interpretation, or the connection between Humpty Dumpty and the folk

In From Metaphysics to Ethics, Frank Jackson at one point cites one of my favorite passages in literature for an odd purpose. The passage in question is the Humpty Dumpty chapter of Through the Looking Glass. He says that we can mean whatever we want by our words, but if we want to address the concerns of others, we should mean what they mean. From this he concludes that, e.g., if we talk about goodness, we should identify our subject matter through the folk theory of morality. This is on p. 118, roughly the middle of the page. It strikes me as a little disingenuous, although not as bad as it seemed when I originally read it. The point that Davidson made (or that I took him to make) was that Humpty Dumpty was wrong; we can't mean whatever we want. We can mean whatever we want only insofar as we can still be understood by our interpreters. If we fail at that, then we haven't meant much.

The best I can do with what Jackson said is the following. He seems to be reading the "mean whatever we want" as the entitlement to define new jargon for theories. In order to make our theories relevant to the folk, we must talk about what they talk about. What they talk about is determined by their folk theories (at what point are these up and running such that we have a subject matter?), so we should define our jargon to accord with their theories. At least, that's as far as I get in understanding this bit of his book.

This point is somewhat inconsequential to the rest of the book, but it still bothers me. He seems to me to be missing the point of the Humpty Dumpty chapter, at least the point as I view it through the lens of Davidson. The trail of reasoning that leads from that to the need to use folk theories of X is odd. In order to talk about things that the folk care about, we need to make sure what we're talking about satisfies folk theoretic properties. I'm confused about how this is supposed to go. Aboutness and subject matter aren't matters of folk theories. Presumably, the particular folk, call him Folksy, will only be concerned about theories that match the folk theoretic properties if Folksy himself has that particular folk theory. If Folksy is not with the folk majority as far as his theory of, say, espresso goes, then it seems there is no reason he'd be engaged if I talk to him about an espresso concept defined from the folk theory thereof. What is more relevant is Folksy's theory of espresso.

I'll have to go back through the Humpty Dumpty chapter and make sure I'm understanding it correctly. The Davidson article "A Nice Derangement of Epitaphs" left quite an impression on me even though I'm fairly doubtful that his arguments are valid in that one. This will be a good chance to go back over some of my favorite reading material. It also gives me a bit of a chance to feel out what I find so offputting about some of what Jackson says regarding folk theories.

Thursday, December 14, 2006

A few reflections

I just finished up fall semester. The classes I've taken this term have meshed well together, in a somewhat odd way, so I think I'll talk about that a bit. This post will be a little scattered. It is also more for my benefit than anyone else's.

First, proof theory. My teacher for it, Jeremy Avigad at CMU, was quite good: very clear and motivating. We talked a lot about some of the philosophical background of this stuff. I feel like it improved my logic chops some. I'm curious to see some more applications of cut elimination. We covered enough of the foundational stuff that I feel I can engage with some of the literature now. In particular, I want to go back and evaluate some of Dummett's writings (again and for the first time). I think there are some lingering strands that I'll be able to pick up on. Another project that I think I'll start on the side is research into inference, broadly. For reasons I'm not completely clear on, I've become fascinated by rules of inference and inferences. I've started compiling some information about them. I'm not sure if that will develop into much, but it will probably occupy future posts.

Second, the M&E seminar. This was less exciting. I'm glad for having done the readings since some of them were things I've meant to read for a while. It confirmed my suspicion that Gettier cases just don't excite me and that I like Quine. At the very least, it is instructive to figure out how Quine went wrong.

Third, Kant. We made it through the schematism in the first Critique. I feel like I've gotten a solid background in the material and a bit of a grip on the overall dialectic. I can certainly see why it occupies such a large role in the history of philosophy. This connects up with some of my other interests through the role that judgments play in the arguments of the Critique, up through the deduction. I read through Kant and the Capacity to Judge, and I'm pretty well convinced that the semantic picture is important for understanding the structure. Just as an aside, doing well on the first paper I wrote for that class is probably the thing I'm proudest of this term.


Finally, the seminar on philosophical naturalism. We covered a large number of topics in this class. The thing I wrote my paper on was supervenience and physicalism. Once I get some comments back on my paper I'll probably throw up a few posts on that subject. This class was extremely satisfying for me since it was different enough from what I'm used to to throw me out of my philosophical comfort zone, but not so different as to require huge amounts of work to follow. That made it extremely philosophically insightful; I think I got a lot of perspective out of it. Brandom went through a large number of arguments, and I've been tempted to write about them. Many of them were new to me, though, and I feel a bit uneasy writing about those. I got some ideas about things to follow up on, namely some philosophy of language/metaphysics stuff about sortals, some more intersections between formal logic and less formal philosophy, and some things about Sellars. There was also a decent amount of outside literature I had to go through for my final paper, although a fair amount didn't make it in. To plug two things that probably don't get read enough: Etchemendy's Concept of Logical Consequence and Field's Science without Numbers are both fantastic reads with a lot of depth.


What is the common thread between my classes? One is the role that inference plays in them. It is center stage in proof theory, and it made small guest appearances in Brandom's class and in Kant (through judgments). I was also able to connect all of my classes back to my interest in the philosophy of language, except M&E, which was mostly good background reading. I've also gotten more interested in philosophical logic this term. This is in large part due to Brandom and Avigad, although neither taught about that specifically; I was able to have some long discussions about related issues with them though. Also, I think my writing improved.

This is all hopelessly vague. I will try to post some about these topics over break. In short, Pitt is grand, I need to read me some Dummett and Prawitz, and semesters last too long.

I declare victory over fall semester

I turned in my last required paper for this semester today at 4:30. I win. It was for the philosophical naturalism class with Brandom. I'm feeling pretty good about it. After I decompress a bit I will probably put up some reflections on the term and whatnot.

Tuesday, December 12, 2006

Another silly linguistic observation, plus a non-sequitur

Another conversational quirk I've noticed among philosophers is one that seems to be more prevalent among those who have done a fair amount of math. When confronted with a formulation of something they don't understand, they say, "What does that even mean?" It isn't "what does that mean?"; rather, it has an "even" stuck in there. It sounds stronger, even though I'm not sure how it is asking for anything stronger than the "even"-less question. What does "even mean" even mean?

To change the topic, I was talking to some guys in the Pitt program earlier about great philosophical dialogues to be written. One guy suggested a dialogue between early and later Putnam in which the early Putnam wins. Another one might be a conversation between early and later Wittgenstein. What would they say? Maybe the early one would just recite the main propositions of the Tractatus while the later Wittgenstein would explain why they are nonsense.

Sunday, December 10, 2006

And just as hard to use (II)

In the previous post on combinators and lambdas, I had said that the lambda calculus is not first-order because of variable binding. This was hopelessly vague (nod to Richard Zach for pointing this out), so I'm going to clarify. First-order logic has variable binding in the form of quantifiers. The lambda calculus has variable binding by lambdas. What's the difference? The lambdas are in terms, so in the lambda calculus there is variable binding at the term level. The quantifiers of, e.g., FOL appear only in formulas, not terms; they bind at the formula level. The reason that FOL+combinators can't fully simulate lambda abstraction is that it has no binding at the term level. There are extensions of FOL that allow this, but they were not what I was talking about. I hope that clears up what I was trying to say: a joke that required too much setup.
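
A quick Haskell illustration of the difference (the example is mine): the lambda below is itself a term, so it can be handed to another function as an argument, which is something a quantified formula of FOL can never do.

    twice :: (a -> a) -> a -> a
    twice f x = f (f x)

    -- The binder \y -> y + 1 produces a term, not a formula, so it can be
    -- passed around: twice (\y -> y + 1) 0 evaluates to 2.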

Saturday, December 09, 2006

Can anyone explain this?

My roommate sent me a link to a video on YouTube that features a guy reading a bunch of Kit Fine's books. Can anyone explain it? I'm at a complete loss.

There are no singular terms in English

Here's a post based on something that a teacher of mine once said. I'll try to reconstruct the dialectic he proposed. Logicians distinguish predicates and terms. One kind of term is the singular term, which, in Quine's words, purports to denote a single object. Why are there none in English? For one, English sentences are composed of words, not predicates and terms/singular terms. Further, there are no natural linguistic kinds that could be candidates for singular terms. Noun is out, since, while it would cover "George", it would not cover "the dog". Noun phrase (NP) (or determiner phrase, depending on your favorite syntactic theory) will cover both of those, but it will also admit "the dogs", which is plural, although possibly a singular term if groups of things are singular. We also have to deal with mass nouns, such as "water", which are not singular terms (at least, I think not). Determiners like "several" and "some" will also need to be included, since those denote single things (if groups are singular). Determiner phrases (DP) are not noun phrases in HPSG though. (Forgive me; that is the syntactic theory I'm most comfortable with.) Similarly, complementizer phrases (CP) aren't noun phrases either, so phrases such as "that my laptop is on" will fall outside the scope of "singular term".

Well, suppose we take all three of those, NP, DP, and CP, as our singular terms. Modulo the worry about plurals flagged above, we haven't isolated anything that looks like a natural linguistic kind. Suppose one said instead that whatever acts as the subject of a verb phrase (VP) is a singular term. This just seems false. Besides there being at least prima facie words that seem to denote more than one thing, there is another wrinkle: things like the dummy "it" in "it is snowing" or the existential "there" in "there is frost in my room". Both occupy putative subject positions; neither is referential in any sense. I haven't mentioned pronouns (although HPSG incorporates the "it" and "there" as pronouns), but similar considerations should apply.

There are, then, no necessary or sufficient conditions for singular termhood in English. This is not to say that a regimented fragment couldn't have singular terms, nor that the semantic representation of a sentence couldn't have singular terms. Just not English.

I think there may have been further considerations about the syntactic properties of various phrase kinds offered as well, but those completely escape my memory. A similar argument could be run about how there are no predicates in English. What do you think? I'm not sure how convinced I am by it or how convinced I should be. On one level, I think it is right. English (mutatis mutandis for other natural languages) is not constructed out of terms. The syntax of English is not the syntax of FOL in anything resembling a straightforward sense.

Monday, December 04, 2006

Some call it a "blog roll"

I've updated the list of other blogs I read. This is in part spurred on by the sudden rush of comments (like 3 in the last week). But really, what better time than the end of the term to do things like this?

Among the links are a couple of non-blog websites. Those are worth checking out too, in particular Philosophy Talk, since it is, in my estimation, a very worthwhile project. I'm not just saying that because I used to work for them. It is a very good project all around. And now it is on Sundays.

Sunday, December 03, 2006

And just as hard to use...

Here's a class-related post that comes out of an explanation I got recently. It is also a bit rough, so be gentle. You can simulate lambda-abstraction in combinatory logic. You want to do this to get some of the flexibility of lambda-abstraction without leaving a first-order (with types) framework. The lambda calculus is not first-order because terms (lambdas) can bind variables, which you can't do in FOL. How are lambdas different from quantifiers? One's a term and the other isn't, maybe? I'm not completely clear on that. But the combinators are first-order. The simulated abstraction doesn't work in all cases; it can't with the resources of just combinatory logic. One case where it fails is (\lambda x.t)[s/z]=\lambda x(t[s/z]) (from Jeremy Avigad). Why might one want to simulate the abstraction at all? There is a result, the Curry-Howard isomorphism, showing an isomorphism between proofs and programs, and so types (and categories, apparently). This means that one can think of a proof as a program, which is definable via the lambda calculus. This in turn means that one can think of proofs as terms in the lambda calculus. What sort of proof though? The terms are natural deduction proofs: free variables correspond to undischarged hypotheses, lambda abstraction discharges a hypothesis (->-intro), and application corresponds to ->-elim. So, one gets that the lambda calculus is the type-theoretic analog of natural deduction. What are combinators then? Combinators are the type-theoretic analog of axiomatic proofs. The route to seeing this is by looking at the types of the combinators. K is p->(q->p) and S is (p->(q->r))->((p->q)->(p->r)). These types are structurally the same as the first two axioms of propositional logic (as usually presented; they'll be in there somewhere). The punchline: combinators are the type-theoretic analog of axiomatic proofs, and they are just as hard to use. Now, the question is: what is the type-theoretic analog of the sequent calculus?
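
Here is the punchline in Haskell, where the type checker does the work (a sketch; the definitions are the standard ones, the names are mine):

    -- The types of K and S are the first two axioms of the implicational
    -- propositional calculus, read through propositions-as-types.
    k :: p -> (q -> p)
    k x _ = x

    s :: (p -> (q -> r)) -> ((p -> q) -> (p -> r))
    s f g x = f x (g x)

    -- Simulated abstraction: the identity \x -> x is S K K, i.e. an
    -- axiomatic proof of p -> p assembled from the two axioms.
    i :: p -> p
    i = s k k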

Friday, December 01, 2006

Speakers as measuring devices

In his "Predicate Meets Property," Mark Wilson presents a challenge to the traditional (Frege-Russell) view of predicates expressing or being attached to properties. The challenge is to explain how the extension of the predicate (determined by the property or universal) changes when the usage of the predicate in the community changes. He has a few examples from non-technical usage and a few from the history of science to illustrate this point. E.g., how are the extensions of the predicate "is an electron" different when it was used in Thompson's time and then in the late 20th century. His idea is to make determining the extension of a predicate on a par with measurement in experiments. That is to say that he wants to view speakers as measuring devices of a sort. Measuring devices have a fairly limited range of circumstances in which they will give the correct results and they have an extremely limited range of things they can detect with any accuracy. This means including in descriptions of what speakers are meaning when they make their utterances parameters for, e.g. background conditions. I think I made this sound much less radical than it is. Anyway. I think that the move to viewing speakers as measuring devices fits with Marconi's model of referential competence. The focus in Wilson's article is not on the spooky reference relation; it is on the practice of speakers applying words to things. This description should make clear the connection to Marconi's work. His referential competence is the ability of speakers to apply words to things in accordance with the meanings they associate with the words. In particular, this seems like it could flesh out the further distinction Marconi draws within referential competence of application and recognition.

I've gotten excited about Marconi's book again (last time was September, when we had a brief email correspondence), in part due to a post over at Philosophy of Brains on Lexical Competence.

Saturday, November 25, 2006

When identity isn't identity, part 2

It just occurred to me that in the previous post on identity in intuitionistic logic, I made a bit of an error (one that survived the suggested correction). The problems with transworld identity are only problems in the context of modal logic. At least, I'm not aware of any problems that arise with identity of individuals across possible worlds in the Kripke semantics for intuitionistic logic. There are enough structural similarities between the box and the universal quantifier that I imagine some formula could be cooked up; no clue what a version of Chisholm's problem with transworld identity would look like in terms of witnesses for intuitionistic formulas though. This is a roundabout way of saying that the conclusion of the previous post (about the apparent possibility of problems in classical logic) doesn't pan out. Since classical logic is validated on Kripke frames with only one world, there won't be transworld identity because there is no transworld to speak of. Alas.

Thursday, November 23, 2006

Citing my sources, of sorts

Some people asked about where the inference in the previous post came from. The two most recent instances I've encountered were in conversation with another student and in the comments on another student's blog. I'm not sure where (whether?) I've seen it in an article or book. However, I have asked my roommate, another grad student, who says he has encountered similar ideas before; it sounded like it was in conversation with other people. I will be tickled pink if I find something like it mentioned in print. If that happens I will straight away put up a citation.

Thursday, November 16, 2006

How does one make this inference?

In the last week or two I've encountered something in several places that strikes me as strange. It is the claim that liking or using formalisms commits one to certain views about the mind. (To broaden it a little: that it commits one to certain views about anything.) The idea seems to be that liking or using FOL or other formal tools commits one to viewing the mind as a Turing machine. I'm not sure how this goes, since it strikes me as both wrong and unsupported. My guess is that the idea goes:
Turing machines are formal tools;
people that like formal tools tend to like Turing machines;
so, this will commit them to certain views on the mind.
This is clearly not a good line of thought. At best, liking Turing machines might bias you towards one line of thought, although there are at least a few logicians who are quite adept at using Turing machine arguments and who think the mind is not one. It would be interesting to see whether a greater percentage of those who often use formal tools view the mind as a Turing machine than of those who don't use formal tools as much. My guess is no. I'd bet that among computer scientists one finds a greater tendency towards viewing the mind that way, but I'm not sure that is relevant.

Monday, November 06, 2006

When is identity not identity?

In the Kripke model semantics for intuitionistic first-order logic, it turns out that you can't interpret '=' as the identity relation on the objects of the domains. You have to interpret it instead as a relation assigned at each node, one that can grow as you move up to later nodes.
[Edit: as Aidan pointed out, leaving open the possibility of ~(x=y OR ~x=y), what I said previously, doesn't make sense. The following is what I meant to say.]
The reason is that you need to leave open the possibility that for some x and y, at node a x=y is not true (a doesn't force x=y) but at some node b>=a, x=y is true at b (b forces x=y), which means that a doesn't force ~x=y.
If you want decidable equality you don't have to worry about this. If you're dealing with classical logic, you get excluded middle, so I think you can use the identity relation and get all the puzzles about trans-world identity. However, for Kripke models of classical logic, there is only one world, so only one domain. I'm not sure how the puzzles creep in exactly.
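
A two-node example makes this vivid (a countermodel of the familiar sort, in the notation above):

    Nodes a <= b, domain {c, d} at both. Let a force only c=c and d=d,
    and let b force c=d as well. Monotonicity holds, and:
      a does not force c=d, since c=d fails at a;
      a does not force ~(c=d), since b >= a forces c=d.
    So a does not force c=d v ~(c=d). Reading '=' as real identity on the
    domain would rule this model out: c and d either are or are not the
    same object, once and for all.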

Saturday, October 28, 2006

Diagonal propositions

Stalnaker talked about diagonal propositions as being essential to the informational content of an utterance. According to Perry's discussion of them after his paper "The Problem of the Essential Indexical" in the collection with the same name, the diagonal proposition is characterized as follows: "Consider the case where I say 'I am standing.' Call the token I use t. Suppose, for the sake of argument, that tokens are not necessarily tied to their producers - that the very token that one person in fact produces, could have been produced by others. Now consider the set of possible worlds in which t is true. Be careful. Do not consider the set of possible worlds in which I am standing. In many of these worlds, t will never have been produced. Consider instead the possible worlds in which t is produced by the various people that we have agreed could have produced it. In some of these the producer will be standing. The set of those worlds is the proposition we want. Call this proposition P." It just struck me that diagonal propositions don't seem to be compatible with neo-Gricean or Davidsonian approaches to meaning. The neo-Griceans would object because what is said is determined in large part by the speaker's communicative intention. This would block the step of assuming that t is not necessarily tied to the producer. At best we'd need to restrict to worlds where the producer of t had the same (same type of?) communicative intention as in the actual world, but this will be a smaller subset of the worlds in P. The Davidsonian would object because constructing a theory of truth for a speaker will depend on the rest of the speaker's holds-true attitudes, or preferences. Again, it looks like they would object to the step divorcing the utterance from the producer.

Wednesday, October 25, 2006

Kant: judgment and thought

In the transcendental analytic, Kant gives his table of judgments that comprise all the moments possible in judgments involving concepts that determine objects. He argues that the categories of cognition are structurally identical to those of judgment and all the moments in the latter table have analogues in the former. He gets there with the premise that all thoughts involving concepts are judgments. I think this is underwritten by his claim that the function of the faculty of judgment is the same function as that of cognition. Since we have an exhaustive list of judgment forms, and a premise saying there is a structural identity between forms of thought and forms of judgment, we infer an exhaustive list of thought forms, which is the table of pure concepts. As an aside which adds nothing to this argument: I think this is what Brandom is talking about (or at least an example of what he is talking about) when he says that Kant took the Cartesian project of understanding the mind in epistemological terms and changed it to be an understanding in semantic terms.

Thursday, October 19, 2006

Ridiculous...

I switched to Blogger Beta tonight. The interface for editing posts is a bit nicer, so I went through and added titles to all my older posts; about half of them lacked titles because it took me a while to figure out how to enable that option. I also added a link to the labels for this blog, which I will add to in the near future. Currently I'm only using "favorite posts" as a label. This is attached to some of my favorite posts, surprise. These are ones I like, although they probably aren't the best ones, and they are kind of rants at times.

Varieties of inference

Jaroslav Peregrin has a nice book on structuralism and inferentialism called "Meaning and Structure". In it he traces the roughly structuralist/holist approach to language through Quine, Davidson, Sellars, and Brandom. These philosophers are characterized by their holistic and inferential approaches to meaning. Of course, they differ on several points, e.g., empiricist or naturalist tendencies and acceptance of intensional constructions. The main thing I want to mention is Peregrin's characterization of Brandom. First, some background. Peregrin shows how to take the inferentialist approach to meaning and wed it to the more or less Montagovian approach to compositionality. In fact, he thinks that compositionality is a key feature of structuralism about meaning (the view that structure, in the form of syntactic and semantic combination, takes pride of place over the parts combined). Next, why this is interesting. Brandom's own presentation of the logic and semantics for his inferentialist semantics rejects compositionality. The semantics is recursive, but not compositional, thereby offering a counterexample to Fodor's claim that learnable languages must be compositional. (This is in Brandom's Locke lecture 5.) This is a neat move in and of itself. Peregrin presents Brandom as being compatible with the compositional camp though. Where do they differ?

Brandom has a view on the social dimension of pragmatics that he calls deontic scorekeeping. This is keeping track of who is committed and entitled to what and why: the game of giving and asking for reasons, roughly. The logic behind this deals in incompatibles. Each proposition p is assigned a set of incompatibilities, commitment to any member of which undermines entitlement to p, and vice versa. As an example, the proposition "that is a banana" is compatible with either "that is green" or "that is ripe", but not with both together. As another example, one agent's being committed to incompatibles can serve as a reason for another agent not to endorse the relevant assertions by the first agent. A fairly complex picture emerges. The logic and semantics that Brandom presents for this is not compositional, but it is an interagent affair. The inferential role of subsentential parts is determined using substitution classes, which looks, at least superficially (from memory), like a version of the categorial logic that Lewis uses in "General Semantics". This looks compositional. What's non-compositional then? The propositional logic of the inter-agent incompatibility semantics is definitely not compositional. The official version of incompatibility logic doesn't seem to have a first-order version yet. It looks like the categorial logical determination of subsentential inferential contribution is compositional, at least if Peregrin's presentation is as correct as it seemed. However, the logical connectives are no longer compositional, and what Brandom is most concerned with is the inferential role of propositions. This will not be compositional, as the connectives are not.
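
A toy Haskell rendering of the incompatibility idea, just to fix ideas (this encoding is mine, not Brandom's official incompatibility semantics):

    -- A set of claims is incoherent when its members are jointly
    -- incompatible. Toy case: banana + green + ripe can't all be endorsed.
    incoherent :: [String] -> Bool
    incoherent cs = all (`elem` cs)
      ["that is a banana", "that is green", "that is ripe"]

    -- A set of commitments is incompatible with p when adding p to it
    -- yields an incoherent set.
    incompatible :: [String] -> String -> Bool
    incompatible cs p = incoherent (p : cs)

    -- incompatible ["that is a banana", "that is green"] "that is ripe" == True
    -- incompatible ["that is a banana"] "that is green"                 == False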

What gives? Peregrin's book came out in between Making It Explicit and the Locke lectures, and I think the incompatibility logic was developed in the period after Peregrin's book and before the Locke lectures, so he can't be faulted for missing it. It looks like he gets the inferentialism of Making It Explicit right as far as the subsentential parts are concerned. Things seem to shift at the propositional level between Making It Explicit and the Locke lectures, but I'd have to check them both again to be sure.

Sunday, October 15, 2006

Best quote from a math book

In the book "Set Theory: An Introduction to Independence Proofs" there is a section in the intro in which the author lays out a few philosophical positions on set theory, namely platonism, finitism/constructivism, and formalism. The lovely quote is:
"Pedagogically, it is much easier to develop ZFC from a platonistic point of view, and we shall do so throughout this book."

Saturday, October 14, 2006

Linguistic Observations and other nonsense

The talk on Friday gave me the motivation to post something about two linguistic quirks in philosophy.

First, there is the wonderful quirk of prefacing an assertion or response with "Look...". If A raises an objection, B replies, "Look, XYZ". Or, to raise an objection in the first place, A says "Look, your position says XYZ". Or, as prima facie evidence, "Look, we all admit the empirical reality of time and space." Etc.

Second, there is the equally wonderful quirk of saying that X just is Y. Usually "just is" is italicized. What are mental states? They're identical to brain states, i.e. mental states just are brain states. Space just is the form of outer sense. Noon just is 12 pm. Identity just is just-is-ness.

I have written over a hundred posts, but a few of them are pithy little notes about non-philosophical things. I think this post will put me at 100 real posts. Woo!

Monday, October 09, 2006

Not that anachronistic

Kant was a constructivist of sorts. He thought one shouldn't use excluded middle in arguments that are supposed to give us knowledge. He thought that in order to have knowledge of mathematical objects they had to be constructed in intuition (not sure on the phrasing needed to make this correct; we haven't gotten to that part of the Critique). He also rejected reductio arguments because they don't bring you into acquaintance with the object/proposition of the conclusion. How delightful! One odd thing is that he didn't think modus ponens was a good rule of inference, because it required infinite knowledge of the antecedent in order to affirm the consequent with certainty. But he did think modus tollens was good, since it only requires one disconfirming instance. I wonder if he had a view on arguments from the contrapositive. I don't know if they were even considered then.

Sunday, October 08, 2006

Sentences are physical structures

Frank Jackson says repeatedly that sentences (types and tokens) are physical structures. I am confused about what he means by this. Just to be up front, I don't think anything in Jackson's argument hinges on this, but it is a very curious assertion. Starting with tokens, it seems fairly clear that they are physical structures. They are either markings on some surface, or vibrations in the air, or pixels in proper formations of color. (If you think that some intentions have to be the causes of these, then that caveat must be added.) Now, take the sentence "Snow is white", just to stick to something we all know and love. Hand-written tokens, spoken tokens, and digital tokens are all possible, but there isn't anything physical that connects them as tokens of one thing. On to types. It is hard to see how types of anything are physical structures, since types are abstractions. If he means that they are types of physical structures, then he's probably right. They wouldn't be types of any one single physical structure though; this would make them disjunctive, I guess. The ontology of language is kind of complex.

As an aside, Stanley Peters once told a funny story about how he gets confused when philosophers talk about sentences. He said that philosophers tend to see them as linear strings of characters, on a board or on paper. Linguists, he says, see them as complexly structured objects, not only in the syntactic dimension, but also in the phonetic dimension. The transition from a wave form to a string of characters or a syntactic structure, let alone a semantic object, is a huge step. While we can idealize away a lot of it (and we do), one needs to stop and ask, is the resulting theory really a theory of anything like the language we deal with? It is nice to know that linguists think stuff philosophers do is weird since I would venture a guess that philosophers think stuff linguists do is weird to some extent.

Friday, October 06, 2006

Philosophy enough

Quine has that famous saying, "Philosophy of science is philosophy enough." While it is quite a strong pronouncement, it isn't one that seemed to exert much influence on Quine's own writing. He didn't write much about philosophy of science. He often wrote that science should have pride of place in our worldviews, but there isn't much philosophy of science proper. I think the closest he gets is discussing the relation of theory and evidence, which is certainly a bit of philosophy of science. It is sort of like the people who say philosophers should engage in naturalized epistemology, e.g., my hero Van, but don't include long discussions of the psychological literature or cite any psychological findings. Gesturing at Psychology, with a capital 'P', is not enough, I think. Davidson, while not buying into this "philosophy enough" business, did "dirty" his hands with some empirical research in decision theory; he published the results in a book with Pat Suppes. It's understandable why one would want to stay away from the research and stick to the pronouncements. The former is frustrating and a huge time sink.

Wednesday, October 04, 2006

Interpretation and representation

One really helpful distinction that Etchemendy makes in his "Concept of Logical Consequence" is between representational and interpretational semantics. Representational semantics treats a model as a way the world might be, given what the words etc. mean. Thus, a given truth table for a lot of propositions will represent the myriad possibilities the world could have been, depending on whether one or another proposition is true or false. Interpretational semantics is the more standard model-theoretic kind. It varies the meanings of terms and tells us what would be true given that the words have the interpretation in question. Very roughly, the former varies worlds and holds meaning fixed, while the latter varies meanings and holds the world fixed.

This gets deployed by Etchemendy when looking at cases where the two conceptions come apart. Representational semantics supports counterfactuals involving differing ways things may be. Would a sentence A&B be true in a world where A held but not B? No. More concretely, would "snow is white" be true in a world where snow is black? No, again. Therefore, it isn't necessary that snow is white. Interpretational semantics tells us that "snow is white" would be true if 'snow' meant tar and 'is white' meant 'is black'. And there are clearly other interpretations on which this sentence is false.

Representational semantics is what connects to our ideas about necessity. If something is true in all the different situations represented, as the things under consideration change, then it is necessary. Interpretational semantics tells us when things are true in virtue of meaning, i.e., no matter what meaning is assigned to the parts of a sentence (holding to type restrictions and other things), it comes out true. We can't get necessity out of this though, because it involves only meanings with regard to the one domain. Necessity deals with possible worlds, and so we need those in the picture to properly claim that something is necessary. It is pretty easy to slide between the two ideas, especially since they are extensionally equivalent in some cases. The urge is even stronger if we think of models as being worlds while evaluating sentences using interpretational semantics. We get to have our necessity cake and eat it too.
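
A toy way to see which parameter each semantics varies (my own encoding, not Etchemendy's formalism):

    type World = String

    -- An interpretation assigns "snow is white" a truth condition on worlds.
    type Interp = World -> Bool

    standard :: Interp   -- 'snow' means snow, 'is white' means is white
    standard w = w == "world where snow is white"

    deviant :: Interp    -- 'snow' means tar, 'is white' means 'is black'
    deviant _ = True     -- tar is black however the snow turns out

    -- Representational: hold the interpretation fixed, vary the world:
    --   map standard ["world where snow is white", "world where snow is black"]
    --     == [True, False]   (so the sentence is not necessary)
    -- Interpretational: hold the world fixed, vary the interpretation:
    --   map ($ "world where snow is white") [standard, deviant] == [True, True]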

Tuesday, September 26, 2006

Two-dimensional semantics

Frege had sense and reference. Kaplan has character and content. Jackson has a-in/extensions and c-in/extensions. All of these are supposed to account for the difference in cognitive significance between sentences or terms that are intensionally different but extensionally the same. Frege's notions don't line up with Kaplan's. For sentences (similarly for terms), the sense is a proposition, a thought, and the referent is a truth value. For Kaplan, contents are propositional: functions from worlds to extensions, so true or false at a world. Characters are functions from contexts of utterance to contents. Kaplan's distinctions don't quite line up with Jackson's either. The character/content distinction maps pretty well to the c-in/extension distinction, but character is supposed to account for the cognitive significance of differing sentences/terms with the same content, and for Jackson the a-intension and a-extension are supposed to do this.
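
The type discipline makes the comparison easier to state. A sketch in Haskell (mine, and it flattens contexts to worlds, which is itself a simplification):

    type World = String
    type Context = World                 -- a context treated as a world considered as actual
    type Content = World -> Bool         -- Kaplan's content, for sentences
    type Character = Context -> Content  -- Kaplan's character

    -- Jackson's c-intension: fix the context as actual, vary the world of evaluation.
    cIntension :: Character -> Context -> World -> Bool
    cIntension ch c = ch c

    -- Jackson's a-intension: the diagonal, evaluating at w while taking w as actual.
    aIntension :: Character -> World -> Bool
    aIntension ch w = ch w w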

It seems like it would be possible to extend Kaplan's theory to Jackson's, but it might mean changing how "actually" works in Kaplan's semantics. (It would also mean accounting for the bit about character just mentioned.) Why "actually"? I don't remember if "actually" denotes whichever world w is in the context or if it denotes the actual world @ regardless of the context. Both views on "actually" are out there, e.g. Jackson likes the former and Perry likes the latter. So, why "actually"? Jackson wants the a-extension of a term to be the extension of the term in a world w, under the assumption that w is the actual world. If Kaplan's view is that "actually" denotes w, then it should work just fine. If his view is that "actually" denotes @, then the a- and c-in/extensions collapse. (The library's only copy of Themes from Kaplan is out.)

The next question is how motivated this is. For Kaplan, the character is roughly the linguistic meaning (roughly). I guess the c-intension is supposed to correspond to that. But if we move the cognitive significance to the a-intension, how well does the c-intension capture the linguistic meaning? Kaplan's point is that the linguistic meaning of a term is what gives it its cognitive significance outside of its content. I guess the a-intension includes the linguistic meaning for the languages we speak, as well as the linguistic meaning for the languages of all those other people in the possibluum. (Possibility+continuum=possibluum; the term is from John Perry, who used it to tease David Lewis at UCLA.)

(I debated naming this post "Love 'actually' " but I stopped myself, for better or worse.)

Friday, September 22, 2006

Kantian concepts: abstraction

I think I understand something in Kant. In his Logic, he describes three ways in which one generates concepts: abstraction, reflection, and comparison. You have your concepts, and each concept has an intension and an extension. The extension is the set of things that fall under the concept (anachronistically understood set-theoretically). The intension is the set of differentiating features, the marks. Abstraction generates concepts by fiddling with the intension, specifically by removing marks. Since the extension and intension are inversely related (according to Kant), this will increase the size of the extension. This makes sense to me. You start with a concept, say, poodle. You abstract out the breed and you get dog. Abstract out more and you get mammal, quadruped, or animal. I'm not sure exactly how the hierarchy goes, but it seems like a fairly intuitive idea.

The thing I'm wondering is whether you get a strict hierarchy or multiple inheritance. Kant says the most general concepts are something and nothing; you can't get any more abstract than that. Fair enough. In my toy example, could one abstract in a different order, say abstract quadruped out before mammal? Maybe there is a strict containment of mammal by animal, but there looks to be a fair amount of room to play within the concept dog. If we have a white poodle and we abstract out white, we have poodle. If, rather, we abstract out poodle, we have white dog. So far Kant hasn't set down any rules about containments and order of abstraction. Maybe white appears under poodle and under dog? Not sure.
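
A toy model of the inverse relation (my encoding, nothing Kant says): treat a concept as its set of marks and abstraction as deleting a mark; the extension can then only grow.

    import Data.List (delete)

    type Mark = String
    type Intension = [Mark]

    abstractOut :: Mark -> Intension -> Intension
    abstractOut = delete

    -- The extension: everything bearing all the remaining marks.
    extensionOf :: [(String, [Mark])] -> Intension -> [String]
    extensionOf things int = [x | (x, ms) <- things, all (`elem` ms) int]

    -- With things = [("Fifi", ["dog","poodle","white"]), ("Rex", ["dog","brown"])]:
    --   extensionOf things ["dog","poodle"]                        == ["Fifi"]
    --   extensionOf things (abstractOut "poodle" ["dog","poodle"]) == ["Fifi","Rex"]

On this toy picture the concepts form a lattice rather than a single chain: deleting "white" and deleting "poodle" from a white-poodle intension yield different, incomparable concepts, which is the multiple-inheritance option.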

I will have to do posts on reflection and comparison when I get through the sections on them in the Logic.

Tuesday, September 19, 2006

Evidentialism

Feldman and Conee's article "Evidentialism" tries to defend the view that epistemic justification for a belief is determined by the quality of the believer's evidence for the belief. They sum this up in a principle EJ:
Doxastic attitude D towards proposition p is epistemically justified for S at t iff having D toward p fits the evidence S has at t.

There are a couple of things about this article that I want to mention. First, the view seems pretty normative. Justification seems like a thoroughly normative notion, whether it is of the epistemic flavor or not. Feldman and Conee say that their view doesn't conflict with Goldman's (as regards people believing logical consequences of their justified beliefs) because "EJ does not instruct anyone to believe anything. It simply states a necessary and sufficient condition for epistemic justification." There's a sense in which I agree: EJ doesn't instruct anyone to believe; agents can disbelieve, reserve judgment, whole-heartedly endorse, etc. But this isn't what they mean, I take it. They mean that it doesn't recommend any doxastic attitude. Surely this is false. It seems reasonable to assume there are only finitely many doxastic attitudes one can have. Some of these, say the subset JB, will be justified with regard to p and the evidence at one's disposal at t. Surely EJ recommends adopting one of the attitudes in JB, as they are the justified ones, whereas the ones not in JB are not justified.

That is in section II. In section III they make the claim: "Having acknowledged at the beginning of this section that justified attitudes are in a sense obligatory, we wish to forestall confusions involving other notions of obligation." Whoa. That makes my interpretation of them from the preceding paragraph look bad. I don't see how it meshes with their previous claim, though. They say that there may be non-epistemic obligations that one encounters in forming beliefs, but these seem beside the point. There may be a point I'm missing (it might be the narrow reading of "belief" I discounted above), but it looks like they are asserting contradictory things.

One final observation: they say at the end of section III, "But it is a mistake to think that what is epistemically obligatory, i.e., epistemically justified, is also morally or prudentially obligatory, or that it has the overall best epistemic consequences." This once again asserts the normative nature of their account. But I want to focus on the last bit. They say that meeting one's epistemic obligations doesn't have the overall best epistemic consequences. Does this mean that it doesn't ensure true beliefs? Fair enough. What are the overall best epistemic consequences then? They don't mention knowledge in the article, so it is doubtful that is what is at issue. They can't be talking about justification, because that wouldn't make sense. Really, this claim strikes me as weird and based on odd reasoning. It rests on a weakly supported inference from an idea in James's Will to Believe: if you believe God exists even though you aren't justified in that belief, there may be epistemically good consequences. From this they draw the inference that EJ might not promote the epistemically best consequences. If it doesn't, why adopt it? Their inference is far too hasty and does some work toward undermining their point in the article.

Psychophysical laws

In his arguments for anomalous monism, Davidson uses the premise that there are no strict psychophysical laws, defining strict laws as ones that do not have any ceteris paribus clauses. One objection to this is that there are psychophysical laws and that whole industries are built upon them. The best example is anesthesia: one wants a lawlike connection between the physical effects of the drugs and the mental states (or lack thereof) that follow from them. Whether this is an objection to Davidson hangs on the strictness of the anesthetic "law". If there are no ceteris paribus clauses involved in the effect of not feeling pain, then this is a good example and a good counterexample. If there are (and my guess is that there are), then while there may be a lawlike relation between the physical effects and the mental effects of the drug, it will not be a strict lawlike connection. I don't think Davidson denied that there are psychophysical laws, just strict ones. On this understanding of him (which will be wrong if he does say no psychophysical laws, period), the anesthesia counterexample doesn't fly.

Friday, September 15, 2006

Kant: judgments and concepts

Here's the first of a semester-long series of attempts to make sense of things in Kant's first Critique. In the introduction to the Critique of Pure Reason, Kant speaks of knowledge until he introduces the analytic-synthetic distinction, at which point he switches to talk of judgments. He doesn't make it clear whether he's talking about judgments as acts of judging or as the contents of those acts. He seems to go back and forth a little, but it looks (from my brief and shallow reading) like he's mostly talking about the contents. In any case, why the need to switch from knowledge? Maybe this is related to the idea that judgments are things you can take responsibility for, but knowledge is not. I'm really not sure though.

A related thought about concepts: what happens when analytic and synthetic judgments are made? This takes Kant to be talking about acts, but I will put the question out anyway. If you make the a priori judgment that S is P and it is synthetic, then, according to Kant, you are adding the concept of P to the concept of S. So, when you are done, does that mean you have a new concept, say P(S)? Or is it just an assertion that S is linked to P in the synthetic a priori way? If the latter, in what sense are you adding anything to the concept of S? My sticking point is that I don't get in what sense anything is being added. If you make two sequential judgments that S is P, is the second analytic, since S contains P from the first judgment, which is synthetic? Switching to the idea that judgments are contents, not acts, things are still confusing. Kant makes synthetic a priori judgments sound like something is being added to something. This sounds dynamic (in the pre-theoretic sense), and contents seem to be static (again, pre-theoretically). Maybe this is my philosophy of language interest creeping in where it should not creep.

Thursday, September 14, 2006

Minimal semantics and Gricean pragmatics

Suppose that Cappelen and Lepore's minimal semantics is right. This means that most of the things we say are false. No utterance of "I am tall" is ever true (at least on this planet). No utterance of "That is flat" is either. Lots of things we say won't be literally true. This means that most of what goes for what-is-said is not true either.

There's the rub. If most of the propositions you express as what-is-said are not true, then the maxim of quality goes right out the door. I suppose that one could reason like this: he said "I am tall," but since that isn't true, there must be a further, relevant proposition that he wanted me to understand. There are two problems with that. One is that there are a lot of propositions in the neighborhood of the one expressed by my "I am tall". A whole lot of them. You'd be hard-pressed to pick out the right one given only that I have expressed something in that semantic neighborhood. Additionally, this would turn most acts of understanding into implicature recovery. I suppose that Kent Bach would be happy with this, since his position isn't too far from this line. Actually, I suppose that the relevance theorists wouldn't mind it either, since they think that some amount of inference is done in every communicative act. They don't use the maxim of quality though. This might indicate that going minimal about semantics pushes one to abandon canonical (or even neo-?) Gricean pragmatics. There are a lot (most? all?) of details that would need to be filled in before the connections (or lack thereof) between minimal semantics and Gricean pragmatics can be made explicit. It seems telling that Cappelen and Lepore aren't Griceans; the minimal semantic project is designed to bolster Davidson's program, and Davidson was very much not a Gricean.

Wednesday, September 13, 2006

Doing epistemology

I came out of the M&E seminar today more interested in epistemology than I have been since I first read Descartes and subsequently got my interest pounded out of me by repeated Gettier references. It occurred to me that I'm not clear on what we are supposed to be doing in doing epistemology. There seem to be several somewhat related though not identical goals. One is to refute the (Cartesian) skeptic and say "I DO know that I'm not a brain in a vat." Another is to present an analysis of the concept of knowledge. This doesn't need to entail the first point, since it might be that our analyzed concept doesn't overcome extreme skepticism. Another is to provide a clear explication of the role of evidence or justification in knowledge claims. This could easily be related to the other two; however, unless we are trying to build a foundation for knowledge in the sense of unrevisable bedrock beliefs, it need not be. There are at least those three points to epistemology. Each is independent of the others. One can give an account of knowledge without getting into skepticism, as long as one's concept of knowledge doesn't include that it must answer the skeptical challenge. The evidence/justification point is independent of skepticism, since figuring out what role evidence plays in knowledge claims doesn't depend on or imply showing how it defeats the skeptic. Nor does it entail analyzing knowledge, since we can either work with a pregiven concept of knowledge or generalize the inquiry to be about belief formation. In any case, one can be justified in or have evidence for false beliefs.

I think that trying to handle all three points at once will lead to some confusion. I believe I see this confusion in Unger's defense of skepticism, although I do not have textual references offhand. There are probably more points to doing epistemology, but these three popped up first. I'll list others as they come to me.

Sunday, September 10, 2006

Information and inference

I had an idea recently that I want to put out there in hopes of making it better at some point. The idea is that there seems to be a link between John Perry's information games and the Brandom-Sellars notion of material inference. Information games are ways of transferring and applying information to one's previously held or newly created information files. Learning new information about Bill lets you put that information into Bill's file. Perry explains more about the information games in several places, in particular Reference and Reflexivity. When someone makes an indexical utterance, say "I am hungry," that gives you information about the speaker, i.e. that the person who said it is hungry. If you make the further connection that the speaker is Bill, then you are licensed to add that information to Bill's file and infer that Bill is hungry. The connection should be clear. Information and files can give licenses to substitute identical singular terms, so from F(a) to F(b) where a=b. This is the model of inference that Brandom discusses. Going a bit further, since information is what the world has to be like given some perceptions and constraints to which the agent is attuned, the proper constraints will allow the agent to infer that Bill hasn't eaten for a while, i.e. to go from F(a) to G(a), a substitution of frames. The description of information as the way the rest of the world must be in order for something to have happened sounds a lot like incompatibilities. Given that p, and p precludes q, if the constraint is accurate then not-q. If you get the information that Ed is a dog, and being a dog precludes being a non-mammal, then the world contains Ed the mammal. However, this conclusion is weaker than the starting premise, since one cannot go backwards from it. The constraint doesn't allow that.
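To make the licensed steps explicit, here is a schematic sketch in the notation of this post (the predicate names are just shorthand for the example above):

    Substitution of terms:  from F(a) and a=b, infer F(b).
      E.g. Hungry(the speaker of u), the speaker of u = Bill, so Hungry(Bill).
    Substitution of frames: from F(a) and a constraint linking F to G, infer G(a).
      E.g. Hungry(Bill), hungry people haven't eaten recently, so HasntEatenRecently(Bill).

The first pattern runs in both directions, since a=b licenses substitution either way; the second is generally one-way, which is the asymmetry worried about in the next paragraph.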

At least, I think the direction from information to inference works. The direction from inference to information is a little trickier. The (in-)compatibility and information connection works in both directions, since the sketch of an argument given above seems to reverse. I'm not sure how the argument back from the symmetric substitution of terms and the asymmetric substitution of frames would go, though. In other words, I have an argument that info->infer, but I still need an argument that infer->info. Once I come up with that I can figure out what is wrong with this idea.

Saturday, September 09, 2006

Holism and dependence

In their paper "There is no question about physicalism," Crane and Mellor try to undermine one of the arguments that Davidson uses to defend the position that there are no strict laws linking the mental and the physical. Davidson says that the mental is holistic and normatively constrained by rationality. Crane and Mellor attack both these lines, but I will focus on what they say about holism. They say that physical laws are holistic too. The example they give is f=ma. They say that given some value for f, one cannot know the values for m or a. The variables are interrelated in the same way as the various beliefs that one has. They conclude, pace Davidson, that holism doesn't prevent strict laws linking the mental and the physical.

The point that Crane and Mellor try to make seems to rest on conflating dependence with holism. This is an intuitive distinction, but I think it holds. Dependence would be when a change in one thing necessitates a change in the dependent things. In the case of f=ma, changing one variable changes the product or quotient of the others, depending on which variable it is. Holism seems like a more global claim, where in order to know one thing about the system you have to know a very large amount about it. Holism implies dependence, but not conversely. The difference doesn't come across enough when the "system" is the three variables in the equation f=ma, since it doesn't take much to know a lot about it. I think it can come out clearly in the following way. The variables in the force equation are functionally dependent, so you have a binary function from any two of them to the third. This function can be curried to produce a unary function returning a unary function that yields the value of the third variable, as sketched below. I don't think that holistic systems are in general functionally dependent. They are relationally dependent, so that knowing two-thirds of the system's variables constrains but does not determine the remaining third. In other words, no functions, so no currying. I will need a more detailed specification of holism to expand this idea, but I think the point is there. Crane and Mellor conflate (inter-)dependence with holism, so I don't think their objection goes through.
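To make the currying point concrete, here is a minimal sketch in Haskell (my own illustration, not anything from Crane and Mellor): the force equation supports a total binary function, so fixing one argument yields a unary function and fixing two yields a value. The claim above is that a genuinely holistic system would not support such functions, only mutual constraints.

    -- f = m * a as a binary function: any two values determine the third.
    force :: Double -> Double -> Double
    force m a = m * a

    -- Currying: fixing the mass yields a unary function from
    -- acceleration to force.
    forceOfMass2kg :: Double -> Double
    forceOfMass2kg = force 2.0

    main :: IO ()
    main = print (forceOfMass2kg 9.8)  -- prints 19.6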

Friday, September 08, 2006

Knowledge and reason

In his "Knowledge and the Internal," John McDowell argues that a hybrid conception of knowledge as a combination of a standing in the space of reasons (belief and justification) together with some cooperation from the world in being the way one thinks (truth) is not tenable. One of the reasons given is that it relies on a picture of the space of reasons as "interiorized", that is, as constituted in such a way that no contingency from the world can upset its connections. Anything reached by reason is certain on this picture. McDowell argues that this picture pulls apart the truth condition and the justification condition since one depends on the world and the other on reasons, and the two don't ever meet. He goes on to say that that picture of the space of reasons depends on a scheme/content distinction that amounts to unstructured perceptoins feeding into our conceptual scheme whihc structures the non-propositional input in such a way as to produce propositional contentful perceivings. He thinks this results in a space of reasons devoid of content, or, as he puts it, dark. What is the way around this problem? To bring the world into interaction with the space of reasons. He thinks that the space of reasons is shaped by the world so that when we reach inferntial conclusions, they are quite likely to be right because the inferential connections have been partly constituted by how the world is. At least, that is what I got after one reading.

I like this since it ties together the justification and truth requirements of the JTB account of knowledge. Severing the tie between the two is what gives Gettier cases their purchase. If you remove the gap between them, then Gettier cases shouldn't work. I'm not sure how this move would explicitly stop a Gettier case, though. Maybe the problem lies in the justification that carries over from the initial premise (Mr. A has 10 coins in his pocket and will get the job) through the existential generalization (the person who will get the job has 10 coins in his pocket) to the final conclusion (Mr. B has 10 coins in his pocket and will get the job, except the believer doesn't realize it). At least on the face of it, the case seems to go through still. Maybe there is something in the Gettier cases that has a presupposition in conflict with McDowell's picture that I'm missing.

Thursday, September 07, 2006

Quinean tension

In class the other day, Brandom said the Vienna Circle split into two camps about what to do when they discovered that naturalism and empiricism were in conflict. One camp said that naturalism was the doctrine they should adopt. The other camp said that empiricism should remain the core doctrine. He characterized Quine as falling into the latter camp, which confused me at first. He said that this was because Quine rejected modal notions, including dispositions, and stuck with the idea of empiricism as a methodology until the end. This partly seemed right, because Quine responds to Davidson in his "On the very idea of a third dogma" by saying that we can't reject the content/scheme distinction because then there would be nothing left of empiricism. It partly seemed wrong, because the Quine I remember talked about the primacy of naturalism (no first philosophy) and of recasting meaning in terms of dispositions to verbal behavior. While that recollection is right, the picture of Quine siding with empiricism makes more sense in terms of his overall philosophy. He did reject all modal notions. He was quite clear about that. I had forgotten that he had also suggested that dispositions should be eliminated in favor of descriptions of the physical structure that underlies the dispositional behavior. This was a tension in Quine's work that he never really seemed to acknowledge. There should be some places where it comes out very clearly. One of the ones suggested to me was Word and Object; I am guessing it is in chapter 2.

Wednesday, September 06, 2006

Kant problem

I am having great difficulty writing a post on Kant. This could mean one of any number of things. Kant may be too deep a thinker for my pithy blog posts. Or, I might not understand the material sufficiently well.

Tuesday, September 05, 2006

Feyerabend and belief revision

I read Feyerabend's Against Method, which was both quite interesting and quite frustrating. I don't think I buy the extreme view of epistemological anarchism he puts forward (I'm not sure I understand it exactly), but I do buy his line that tackling related sets of problems from wildly varying angles can bring to light facts relevant to the approaches that would otherwise go unnoticed. That seems like the Wittgensteinian point I've mentioned before, that philosophers suffer from a lack of examples. Taken a bit further, they also suffer from a lack of alternative viewpoints. Same thing with scientists, or so says Feyerabend.

I took his discussion of the lack of method in scientific inquiry roughly to support my complaint about belief revision theories. I'm going to talk about belief revision and theory testing as though they were roughly the same, which I think they are to a reasonable degree. He pointed out how Galileo ignored counter-arguments and evidence and took a bunch of problematic observations to be noise to be explained by future theories. Galileo was very selective about what sorts of things he took to bear on his ideas, even though there was a large amount of evidence that didn't support his position and even disconfirmed it. One of the things Feyerabend emphasized was the interpretations that Galileo placed on the observations and how these differed from the interpretations of people who saw the evidence as refuting Galileo's position. Galileo had to conceptually reorient himself and adopt a theory that made claims about far fewer things than the contemporary Aristotelian theory. The basic point I took away from this, in light of my previous discussion, was that experience never interacts directly with our web of beliefs. Interpreting the evidence is always up for grabs, within limits. Quine was right that there are beliefs we can hold come what may, but he was wrong about experience impinging directly upon the fringes of our network of beliefs.

Monday, September 04, 2006

The mental and the physical

After reading Crane and Mellor's "There is no question of physicalism" I decided that it would behoove me to read the Davidson articles that they discuss, namely "Mental Events" and "Causal Relations". Both are quite interesting, and they had much more overlap than I expected. Both involve discussions of anomalous monism and both emphasize intensional aspects of language in crucial areas. One of the areas in which the question of extensionality came to the fore was in describing the causal relation. Davidson thinks that the causal relation is extensional because it is describable in an extensional first-order language. Crane and Mellor think it is intensional because, for probabilistic causation, you can have p(a is a)=1 and p(a is the F)=n<1 even when a is the F. Their argument looks like it works for probabilistic beliefs, maybe, but the examples they give don't look like causal statements. You won't have something of the form "e caused e", since events usually cause distinct events as effects. Within the scope of "... causes ---", the terms do seem extensional, so Crane and Mellor's argument seems invalid. They need the causal relation to be intensional because they want to show that paradigmatic, purely physical vocabulary has intensional contexts just like paradigmatic uses of mental vocabulary. The alleged difference between the two vocabularies is one of the things Davidson marshals in support of his thesis that mental events described in mental terms are not connected to physical events described in physical terms by any strict laws.

Sunday, September 03, 2006

The meaning of "meaning"

Do meanings form a natural kind? I really have no idea.

Physicalism: Reduction and Unity

The article "There is no question of physicalism" by Crane and Mellor is pretty good, at least for the first half. They argue that there is no non-vacuous version of physicalism that is true. One thing they say that is in accord with what Dupre said is that physicalism depends on a unity of science thesis that doesn't look like it will work out. I was initially surprised to see that, but after giving it some thought, it makes sense. The grand pyramid of unified science has physics at the bottom and everything else building up on top of it. Each successive layer is reducible to the one below it, ultimately reducible to physics. So the story goes. Mellor and Crane point out that there is no reason to believe this microreduction thesis. There are phenomena that physics studies (Mach's law, special relativity) that do not look reducible or require macroscopic objects. In any case, they argue, it isn't explained what kind of physics we are supposed to be reducing to. The choices are present physics and future physics. If the prior, then we a priori rule out entitites of future physics if that ontology differs from the present's. If the future, how can we say what that ontology will be? That's a convincing dilemma.

The most interesting point of the article was when they pressed the question of why microphysical reduction gets this sort of ontological say. The answer seems to come from Occam's razor: if you can reduce one domain of entities to another, then you should take the smaller domain. But just because everything has these micro-constituents, does that give us reason to think that the macro-things don't exist? Their answer is no. Their reason is that the microreduction thesis doesn't by itself say anything about the existence of macroscopic objects. In order to get that, you have to bring in auxiliary premises, one of which is a massive application of Occam's razor. But once you deploy Occam's razor, what force does Crane and Mellor's point have? It looks undermined to me.

Friday, September 01, 2006

Rational Numbers

The title of this post is a bad pun which I hope will become clear by the end. Davidson's argument (it could have been McDowell's version of Davidson's argument) against there being conceptual schemes which are completely alien to ours and uninterpretable has the premise that in order for something to count as a conceptual scheme, it must be rational. This means that it must be possible to say what is a reason for what in such a way that the structure of reasons and inferential connections in the target conceptual scheme is sufficiently like the home conceptual scheme for an interpreter to recognize rationality in it. If one cannot map someone else's conceptual scheme onto a rational structure akin to one's own, then there is no reason to think it is a rational scheme at all. Thinking about this suggested an analogy to me. Suppose A takes the natural numbers learned as a child as basic, B takes the Zermelo construction of the natural numbers as basic, and C takes the von Neumann version as basic. Each can interpret the others' numbers, i.e. map the foreign numbers onto theirs in such a way as to preserve the right kind of structure. Now suppose D comes along with a completely alien version of things he calls natural numbers. The catch is that none of A, B, and C can map D's numbers to their own in a way that preserves any of the expected structural features. Should A, B, or C say that D is talking about the natural numbers? Seems like the answer is no. Should we think that D is talking about natural numbers, just not those of A, B, or C? Again, no. Does this analogy extend to rationality in the way I want it to? I think so. There are some disanalogous points, mainly that in the case of the natural numbers there is more objectivity (or at least the illusion of more) than with our conception of rationality. Additionally, it is harder to be a pluralist about the natural numbers, in the sense of various equally valid non-intertranslatable versions, than about concepts of a nonmathematical sort. I think the basic point is preserved in the analogy, however.
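For reference, the two set-theoretic constructions in play (standard textbook material):

    Zermelo:     0 = {},  n+1 = {n},         so 2 = {{{}}}
    von Neumann: 0 = {},  n+1 = n \cup {n},  so 2 = {{}, {{}}}

Both satisfy the Peano axioms, and each can be mapped onto the other by a structure-preserving function, which is exactly what A, B, and C can do for each other's numbers and what D's numbers, by hypothesis, do not admit.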

Thursday, August 31, 2006

Classes

This semester it looks like I will be taking the metaphysics and epistemology core seminar, proof theory, a seminar on naturalism in philosophy, and a seminar on Kant's first critique. I expect that many of my future posts will be on these.

Retrogradation

In at least two places (I believe there are more) in Frege: Philosophy of Language, Dummett says that Frege made a retrograde move. The first of these was in assimilating sentences to the class of complex terms. Sentences refer to objects, just like other terms, in Frege's later philosophy; they refer to the True and the False. This is opposed to their function in his early philosophy, which is to have truth-conditions. What's the difference here? Well, the former is a matter of reference and the latter is a matter of meaning. That's one big thing. The other is that on the complex term account, sentences lose their distinctive status. They are special in that they are what let us make moves in language games (what about the builders in the Philosophical Investigations?). They are things that we can take responsibility for, be committed to. Terms don't satisfy these functions at all.

The other retrograde move Frege made was characterizing logic as studying truth rather than inference. I'm not sure whether the tradition prior to Frege, which Kant said was a completed science, focused on inference in the sense of consequence. But I am not familiar with the pre-Fregean tradition much at all. Dummett blames this focus on truth as logic's object of study for the theoretical eddy that was the analytic/synthetic discussion of the logical positivists. Focusing on truth led the positivists to distinguish two kinds of truths, namely the analytic and the synthetic, which could explain why logic was useful and had truths despite having no empirical content. I think this is an interesting observation on Dummett's part. I'd like to check out the pre-Fregean tradition in logic and see how much it focused on inference or consequence over truth (massive display of ignorance here) and then reconsider his point.

Feyerabend and the web of belief

I read an article by John Dupre entitled "The Miracle of Monism" which was about how monism, unity of science, and naturalism are related. In it he cites some reasons that the traditional Popperian falsifiability criterion does not work, which (I think) he attributes to Feyerabend. I was surprised (pleasantly so) to see that a lot of the reasons listed coincided closely with my post on different ways to revise beliefs and Quine's web of belief. This makes me want to check out Feyerabend.

Tuesday, August 29, 2006

Philosophy of language: back to basics

One problem I find myself mulling over a lot is what the basic meaning bearers are, or what should be the main focus of attention in philosophy of language. That isn't quite the best way to phrase it. I think it will get clearer with some examples.

It seems like there are three main candidates for the basic bearer of meaning, which are related: sentences as types, utterances, and intentions, communicative or otherwise. Sentences are basic in the Kaplan tradition of semantics. Utterances are basic in the Perry tradition of pragmatics. Intentions are basic in the neo-Gricean tradition of pragmatics. Looking at this, one might be inclined to say that sentences serve for semantics and then to parcel out the remaining candidates to pragmatics. This strikes me as misguided, but I'm not sure why at the moment.

There is a way in which all three are related. Sentences are types which are tokened in utterances. A near equivalence between utterances and sentences can be had by indexing sentences to agents, times, and locations (and worlds, maybe) such that the sentence is uttered by the agent at the time at the location in the world. Fair enough. There is some underdetermination, since there are many ways to utter a sentence: quickly, slowly, with a drawl, with an English accent, etc. Indexing won't fix that unless it includes a wave form index, but that is just including an utterance type in the index. Intentions can be coupled with utterances since utterances are made by agents with certain intentions, utterances being intentional actions. What is the connection between sentences and intentions? I'm not sure there is a direct link between the two.
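As a toy model of the indexing idea, here is a sketch of my own in Haskell (the field names are illustrative, not Kaplan's or Perry's machinery):

    -- An utterance approximated as a sentence type plus indices.
    data Utterance = Utterance
      { sentenceType :: String  -- the sentence uttered
      , agent        :: String  -- who utters it
      , time         :: Integer -- when, on some arbitrary time coordinate
      , place        :: String  -- where
      } deriving Show

    u1 :: Utterance
    u1 = Utterance "I am hungry" "Bill" 0 "Pittsburgh"

    main :: IO ()
    main = print u1

The underdetermination point then shows up as the fact that distinct concrete utterance events (fast, slow, drawled) can share every one of these indices.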

What recommends one over the others? Sentences have the benefit of being easily incorporated into a logic or formal semantics, a la Kaplan and Montague. This is Kaplan's reason for using them: to use his great phrase, semantics depends on the "verities of meaning, not the vagaries of action." Why use utterances then? To facilitate pragmatic theory. Perry opts for utterances over intentions and sentences since utterances are physical events with times and locations. This allows them to be carriers of information, a fact he exploits in his more general theory of information and situation semantics. Utterances are physical events which can carry information about the world given constraints to which interpreters are attuned and beliefs they have. This is used in his project of providing a naturalized basis for information. Intentions are basic if one is inclined to follow the Gricean line, like Sperber, Wilson, and Neale. The Gricean line is that saying or meaning is a form of intention recognition. Meaning is conveyed just in case the speaker's communicative intention is picked up on by the interpreter. What do utterances and sentences do on this picture? Utterances provide a nuanced way for interpreters to get at the communicative intention; sentences are just the types involved in said utterances, I guess. This can be incorporated into a sophisticated theory of pragmatics, like Grice's own, which is a point in its favor, although not one that clearly puts intentions ahead of utterances, as many philosophers and linguists working in pragmatics take utterances as basic.

Clearly, taking one as basic precludes taking the other two as basic. At this point I just wanted to lay out something I see as a foundational issue in philosophy of language that I don't know a solid answer to. There are a couple of other options that I didn't include in my discussion that might be worth including in the future: propositions and inference. As far as meaning bearers go, inference is an option and propositions are not. Propositions are candidates for meanings, not bearers of them, I think. The question as to which of sentences, utterances, and intentions is basic can still arise for inference. Additionally, I see inference as competing mainly with reference for foundational status, so I didn't include it here.

Monday, August 28, 2006

Back...

I was on a brief hiatus while I moved and set up my new place. Classes start soon which means I will get back into the swing of philosophy (some might say the very slow swing) which will result in more posts soon.

Tuesday, August 15, 2006

Logic, logic everywhere...

I like logic as much as the next guy, but sometimes philosophers and logicians do things that strike me as ridiculous. Sometimes they act as if there were a need to create logics for everything, as if nothing were understood unless we have a formal calculus for it. This is not a complaint against all formal systems. The various forms of counterfactual logics are interesting and worthwhile, although how much light they shed on problems of modality I cannot say for sure. The main offender that springs to mind is a group of papers, which might be growing, on the logic of fiction. I'm doubtful of its efficacy for two reasons. One, I don't think there is enough of a consensus on how to understand fiction for there to be any sort of enlightening logic thereof. The relation of fictional discourse to non-fictional discourse is still a bit shaky. Two, I am doubtful that a logic will shed any sort of light on issues that people interested in literature will care about. The best case scenario that I foresee as a live possibility is that somehow computer scientists or someone working on contradictory independent discourses will use the stuff for some completely unrelated project. There are more offenders. The best summary of this thought that I have heard was from a linguist, Dan Jurafsky. He said that in graduate school in computer science, they would spend their time coming up with new formal systems, and that it ended up being the biggest intellectual morass he had seen.

This may not have been fair to the logic of fiction people, but there are a couple of necessary conditions for creating formal systems that I think were not met. One is a degree of conceptual clarity and another is a clear motivation. I'm not feeling either of them.

Monday, August 14, 2006

Methodology

It is often said that there is no one philosophical methodology. Kant said in the first critique that philosophers were foolish to treat their arguments as if they had mathematical rigor. I'm not sure if this has the same force now that it did then since the methods of mathematics and philosophy have changed somewhat. Both have become a bit more rigorous, although I am not familiar with the gritty details of pre-19th century mathematics.

Quine and Davidson between them argued that there are three dogmas of empiricism: the analytic-synthetic distinction, verificationism, and the scheme-content distinction. For a time, it was considered a knock-down criticism of an idea to show that it presupposed the analytic-synthetic distinction. I'm not sure that it was ever that damning to show that something was verificationist. While in some areas, such as semantics or confirmation theory, verificationism is untenable, there are some areas that seem to get along pretty well with it, i.e. intuitionistic logic and its ilk. Are there areas in which the scheme-content distinction is employed in this way? Davidson used it against Quine, but I think Quine changed his stance on some things afterwards. McDowell uses it some in Mind and World. Would it be a viable program to investigate which doctrines either presuppose one (or more) of the dogmas or entail one (or more) of them? I am thinking of something analogous to recursion theory. In recursion theory, there is something of a method for showing that a particular problem is not computable: show that a solution to that problem would yield a solution to the halting problem. Another, related method is to show that there is a diagonalization argument applicable to the given problem which shows it to be uncomputable. While showing that a given doctrine presupposes or entails a dogma wouldn't be as final a conclusion as the results in recursion theory, I think it would be illuminating. It also depends on how persuasive one thinks the arguments against the dogmas are. One example of a doctrine about which I wonder whether it presupposes a dogma is situation semantics and the scheme-content distinction. There is some talk in situation semantics about categorizing a metaphysical heap (so to speak) with concepts, which sounds like scheme and content.
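Spelled out, the recursion-theoretic method is standard textbook material: to show a problem P uncomputable, show HALT \le P, i.e. show that any solution to P could be converted into a solution to the halting problem; since HALT is uncomputable, P is too. The analogue being floated here would run: show that the tenability of a doctrine D would yield the tenability of one of the dogmas; then, to whatever degree the arguments against the dogmas persuade, D is in trouble.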

Another idea is to see what theses are equivalent to the dogmas, in roughly the same vein as one shows equivalences between various forms of the axiom of choice in set theory. For example, Quine argued that the analytic-synthetic distinction was equivalent to Carnap's language-internal/-external problem distinction. Some things Davidson said made it sound like the scheme-content distinction is equivalent to the myth of the given. Again, this might prove illuminating.

One problem with this idea is that if one does not have some allegiance to empiricism, it might not hold much water as criticism. For example, rationalists were probably not terribly moved by Quine's arguments against the analytic-synthetic distinction. At least, Sellars holds one version of the distinction and Brandom uses it without pause. Even so, I think there's something worth looking into with this idea.

Thursday, August 10, 2006

Action and meaning

(Just a disclaimer: this is a rough post.) One of the biggest contributions that Berkeley made to philosophy (according to John Perry, and I tend to agree) is that he emphasized the importance of the connection between experience and action. There needs to be a tight connection between one's perceptions and beliefs in order to explain actions, intentional or otherwise. Some of the things that play very large roles in the formation of new beliefs and intentions for action are the meanings of words. Suppose that meaning is given by reference relations between words and (other) things, a la 'cat' means cats. How could meaning explain action? The meanings would have to have an interface with the agent's mind, but this is lacking, since the objects are for the most part non-mental. You get objectivity for free, but you need to bridge the gap in order to explain action. I suppose one way to do this is with mental representations, but this seems like it would lead to other problems, like regresses or skepticism. Maybe a better response is to say that in grasping a meaning, the agent puts the meaning into a form that is amenable to mental interactions. She forms representations? (I'm sure there is a literature on this, but I haven't read it.) Next we have to make sure that the representations accurately mirror nature.

To approach this from another direction, if meanings are inferential connections, then it seems like we can explain the links with actions quite easily. Meanings are mental entities of a sort; they are connections among beliefs. This gives us the connection to intentions which makes it easy to see how meaning of this sort could figure into an account of action without needing to invoke representations of external relations. This gives us the action-intention link, but it presents a pickle of a problem: objectivity seems hard to get. I'll need to go back through Brandom's arguments for objectivity in Making It Explicit and write a post or two about it.

Tuesday, August 08, 2006

Primitive truth

I just noticed the option of displaying a title field when posting. I'm going to try to have titles from now on.

One of the things that I find both interesting and frustrating about Davidson's work is how he treats the concept of truth as primitive. He takes it for granted in a great many things, saying that without a concept of truth one cannot have these other concepts. There is something right in that I think. For example, linking a notion of truth with a notion of objectivity, like he does, seems like the right order of explanation. I'm not sure you can get all the mileage out of truth that he tries to, but it is a good thing to keep primitive.

There are two things about it that I find frustrating. First, he never explains how we are supposed to learn the concept of truth. He could make it practically ineffable by saying that it is an innate concept that we cannot do without, rationally or biologically. This would be a bit extreme. It is really hard to see how one would teach it to a small child, since it is pretty abstract, but small children do get a grasp on it. McDowell brought up a similar point with the concept of rationality. How does one learn that concept? Concepts like these require a sort of meta-position from which to observe them in behavior, and to see them there you already have to know what to look for. They can be demonstrated in practice, to a point, but there are a whole lot of theoretical concepts that would apply equally well to the same patterns. This leaves things somewhat mysterious.

The other reason I'm frustrated with the primitive concept of truth is one of the reasons given for saying that truth is undefinable or must be primitive. Davidson cites Tarski's theorem that truth for a language is undefinable within the language on pain of inconsistency. Tarski's result is perfectly clear. What isn't clear is by what right Davidson imports that result from FOL into English. English certainly contains its own semantical words. Even if we regiment English into a theoretical fragment for use in our truth theories, we still haven't shown any sort of translation or correspondence between it and FOL that would allow for theorems in one to be used in the other. Maybe Davidson is idealizing away a lot of things in English such that he is left with a pseudo-formal calculus in the spirit of FOL. This would be a big idealization if it were made, and it would still be in as much need of justification as the current use of Tarski's result in plain English.

Monday, August 07, 2006

Syntactic theory

When I took syntactic theory, Ivan Sag said something that seemed mysterious at the time but now, after working through a book on formal language theory, makes a great deal of sense. He said that older syntactic theories were working with mathematics from the 40s and 50s and were basically sophisticated string rewrite systems. What did he mean? The transformational grammars were based on context-free grammars of the variety inspired by Emil Post's work. These enable one to take a string and apply a series of generation rules that change the string, as in the sketch below. When combined with transformation rules, one can change the order of substrings in a string, including adding in null elements. There is no deeper structure to it than this, however.
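To illustrate what a bare string rewrite system is, here is a minimal sketch of my own in Haskell; nothing in it is specific to any particular syntactic theory.

    import Data.List (isPrefixOf)

    -- A rewrite rule maps one substring to another.
    type Rule = (String, String)

    -- Apply a rule at the leftmost position where its left-hand
    -- side matches, if any.
    rewrite :: Rule -> String -> Maybe String
    rewrite (lhs, rhs) = go
      where
        go [] = Nothing
        go str@(c:cs)
          | lhs `isPrefixOf` str = Just (rhs ++ drop (length lhs) str)
          | otherwise            = (c :) <$> go cs

    -- A toy derivation: S -> NP VP, NP -> they, VP -> fish.
    derivation :: String
    derivation = foldl step "S" [("S", "NP VP"), ("NP", "they"), ("VP", "fish")]
      where step s r = maybe s id (rewrite r s)

    main :: IO ()
    main = putStrLn derivation  -- prints "they fish"

Transformations add rules that permute or delete substrings, but the machinery is still strings in, strings out, which I take to be Sag's point.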

Friday, August 04, 2006

Conversational impliciture

Kent Bach has this idea he calls "conversational impliciture." It comes out of his reading of Grice. He says that Grice posits a constraint on what is said: each part of what is said should correspond to (be expressed by?) a syntactic element of the sentence. What does that get us? If you have a sentence that looks "semantically incomplete," then without the constraint you would be tempted to say that the missing semantic elements are there, just unexpressed or unarticulated. Bach says these things aren't part of what is said, since they don't have a corresponding syntactic realization. They are implicit in the conversation though. Rather than enriching what is said, he suggests a new layer, conversational impliciture, which is the material that is implicit in a given utterance. This meshes with some of his recent stuff, because he does not think that utterances of full sentences always express complete thoughts or propositions. He is all for accepting semantic incompleteness.

Here are three quick, related questions I have about his view. First, how different is it from, say, the unarticulated constituent view of Perry? Both have implicit, assumed information making its way into propositions. (I'm pretty sure conversational impliciture is propositional.) The main difference seems to be that Perry puts it in what is said (or the locutionary content, which, I believe, is his preferred term these days) while Bach leaves what is said semantically incomplete and puts it somewhere else. Does this difference end up coming to much though?

Second, how does impliciture interact with implicature? Is it used in the derivation of implicatures? Grice says that what is said is used to determine what the implicatures are, in combination with the maxims. I suppose impliciture becomes part of the background knowledge, but that makes what is said seem otiose. If we have an implicature whose derivation requires using the impliciture, which is an enriched version of what is said, then what is said is doing exactly zero work in the derivation. It only serves to get us to impliciture, then drops out. It does not seem difficult to construct cases in which this would happen, say an implicature depending on someone's not having eaten today when the person says "I have not eaten." This example needs a lot of fleshing out before it becomes convincing, but hopefully one can see where it is heading. This leads to the third question.

What exactly is the difference between implicitures and implicatures? They seem to require very similar mechanisms in their determination. Why not say that impliciture is an intermediate step in the derivation (if I may use these terms as though the derivations were well-defined) of implicature from what is said? There doesn't seem to be any good demarcation line between the two concepts. Of course, an unclear boundary doesn't mean that the concepts are worthless (Quine didn't win that fight in all cases), but it would be good to clarify.

Tuesday, August 01, 2006

Sellars on meaning and inference

In his "Meaning and Inference" Sellars makes a distinction between material inferences and logical inferences. Logical inferences are classically valid and proceed from 'p->q,p' to 'q', while material inferences go from 'p' to 'q' based on their material contents. This distinction is also made in Carnap's Logical Syntax of Language as L-inferences and P-inferences respectively. The paper goes through a detailed discussion and criticism of Carnap's work, which more and more seems to be something I wll have to go through. I will mention a couple of points that Selalrs makes that interested me.

First, he insisted on the rule part of 'rule of inference'. He focused on the normative force that rules have and how they are involved in action. Reformulating a rule in such a way that it does not indicate an action is to eliminate the rule in favor of a description of the circumstances in which the rule can be applied. This seems basically right. His example is: X is arrestable =_{def} X broke a law. The left-hand side features a permissible action while the right-hand side features only a description of a state of affairs. He discusses logical necessity as being embodied in rules of inference. Something is logically necessary only if it conforms to a certain inference pattern. This is where he ties together modality and normativity with the phrase that Brandom deployed so well: modality is a transposed language of norms. Sellars makes a lot of connections between object language and metalanguage phenomena. He also shows how counterfactuals within a language can be eliminated if one endorses certain rules of inference in the metalanguage. One of my favorite comments made by Sellars is about the confusion of regularity with following a rule. Both are learned behavior, but there are many differences between the two, and ignoring these differences has led to a lot of nonsense about ostensive definition, in roughly his words.

Second, one of the surprising things was how much importance was placed on counterfactual reasoning. The counterfactual inferences that one endorses show what content is assigned to each of the descriptive terms in the inferences. The fact that counterfactual reasoning is not classical and that the subjunctive conditional is not a material implication is seen as a feature and not a bug, pace Quine.

Third, there was a brief argument at the end about how modal and normative vocabulary does not assert psychological facts although it conveys them (I think that is how he put it). This results in the claim that modal, normative, and psychological vocabulary are not reducible to each other. I did not follow this argument, but it was surprising to see so much of Between Saying and Doing show up in some guise in this essay.

Fourth, Sellars seems to launch sustained criticisms of empiricism in his work, particularly of the logical positivist kind. While he praises Carnap a great deal, I think he successfully showed that a functional natural language cannot do without P-inferences and intensional counterfactuals. Carnap thought that extensional L-inferences would do the job just fine. Granted, Carnap was making artificial calculi, not natural languages, but he did think that the artificial languages were capable of being adopted by people as natural languages (according to Sellars).