Saturday, October 28, 2006

Diagonal propositions

Stalnaker talked about diagonal propositions as being essential to the informational content of an utterance. According to Perry's discussion of them after his paper "The Problem of the Essential Indexical" in the collection of the same name, the diagonal proposition is characterized as follows: "Consider the case where I say 'I am standing.' Call the token I use t. Suppose, for the sake of argument, that tokens are not necessarily tied to their producers - that the very token that one person in fact produces, could have been produced by others. Now consider the set of possible worlds in which t is true. Be careful. Do not consider the set of possible worlds in which I am standing. In many of these worlds, t will never have been produced. Consider instead the possible worlds in which t is produced by the various people that we have agreed could have produced it. In some of these the producer will be standing. The set of those worlds is the proposition we want. Call this proposition P."

It just struck me that diagonal propositions don't seem to be compatible with neo-Gricean or Davidsonian approaches to meaning. The neo-Griceans would object because what is said is determined in large part by the speaker's communicative intention. This would block the step of assuming that t is not necessarily tied to its producer. At best we'd need to restrict attention to worlds where the producer of t had the same (same type of?) communicative intention as in the actual world, but this will be a proper subset of the worlds in P. The Davidsonian would object because constructing a theory of truth for a speaker will depend on the rest of the speaker's holds-true attitudes, or preferences. Again, it looks like they would object to the step divorcing the utterance from its producer.
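For what it's worth, Perry's P can be put in notation as follows (the notation is mine, not his):

```latex
% Diagonal proposition of the token t (notation mine, not Perry's):
% |t|^{w} is the (horizontal) proposition t expresses as produced in w.
P = \{\, w \mid t \text{ is produced in } w \ \text{and}\ w \in |t|^{w} \,\}
```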

Wednesday, October 25, 2006

Kant: judgment and thought

In the Transcendental Analytic, Kant gives his table of judgments, which comprises all the possible moments in judgments involving concepts that determine objects. He argues that the categories of cognition are structurally identical to those of judgment and that all the moments in the latter table have analogues in the former. He gets there with the premise that all thoughts involving concepts are judgments. I think this is underwritten by his claim that the function of the faculty of judgment is the same function as that of cognition. Since we have an exhaustive list of judgment forms, and a premise saying there is a structural identity between forms of thought and forms of judgment, we infer an exhaustive list of thought forms, which is the table of pure concepts. As an aside which adds nothing to this argument: I think this is what Brandom is talking about (or at least an example of what he is talking about) when he says that Kant took the Cartesian project of understanding the mind in epistemological terms and changed it into an understanding in semantic terms.

Thursday, October 19, 2006

Ridiculous...

I switched to Blogger Beta tonight. The interface for editing posts is a bit nicer, so I went through and added titles to all my older posts. About half of my old posts lacked titles because it took me a while to figure out how to enable that option. I also added a link to the labels for this blog, which I will add to in the near future. Currently I'm only using "favorite posts" as a label. This is attached to some of my favorite posts, surprise. These are ones I like, although they probably aren't the best ones and they are kind of rants at times.

Varieties of inference

Jaroslav Peregrin has a nice book on structuralism and inferentialism called "Meaning and Structure". In it he traces the roughly structuralist/holist approach to language through Quine, Davidson, Sellars, and Brandom. These philosophers are characterized by their holistic and inferential approaches to meaning. Of course, they differ on several points, e.g. empiricist or naturalist tendencies and acceptance of intensional constructions. The main thing I want to mention is Peregrin's characterization of Brandom. First, some background. Peregrin shows how to take the inferentialist approach to meaning and wed it to the more or less Montagovian approach to compositionality. In fact, he thinks that compositionality is a key feature of structuralism about meaning (the view that structure, in the form of syntactic and semantic combination, takes pride of place over the parts combined). Next, why this is interesting. Brandom's own presentation of the logic and semantics for his inferentialism rejects compositionality. The semantics is recursive, but not compositional, thereby offering a counterexample to Fodor's claim that learnable languages must be compositional. (This is in Brandom's Locke lecture 5.) This is a neat move in and of itself. Peregrin presents Brandom as being compatible with the compositional camp, though. Where do they differ?
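Before getting to that, the compositionality constraint at issue can be stated in the usual homomorphism form (the notation is mine, not Peregrin's): the meaning of a complex expression is a fixed function of the meanings of its parts and their mode of combination.

```latex
% Compositionality as a homomorphism condition (notation mine):
% \mu is the meaning function, \sigma a syntactic operation, and
% m_\sigma its fixed semantic counterpart.
\mu(\sigma(e_1, \ldots, e_n)) = m_\sigma(\mu(e_1), \ldots, \mu(e_n))
```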

Brandom has a view on the social dimension of pragmatics that he calls deontic scorekeeping. This is keeping track of who is committed and entitled to what and why. This is the game of giving and asking for reasons, roughly. The logic behind this deals in incompatibles. Each proposition p is assigned a set of incompatibilities, commitment to any member of which undermines entitlement to p, and vice versa. As an example, the proposition "that is a banana" is compatible with "that is green" and with "that is ripe", but not with both together. As another example, one agent being committed to incompatibles can serve as a reason for another agent not to endorse the relevant assertions by the first agent. There is a fairly complex picture that emerges. The logic and semantics that Brandom presents for this are not compositional, but they are an inter-agent affair. The inferential role of subsentential parts is determined using substitution classes, which looks, at least superficially (I'm going from memory), like a version of the categorial logic that Lewis uses in "General Semantics". This looks compositional. What's non-compositional then? The propositional logic of the inter-agent incompatibility semantics is definitely not compositional. The official incompatibility logic doesn't seem to have a first-order version yet. It looks like the categorial determination of subsentential inferential contributions is compositional, at least if Peregrin's presentation is as accurate as it seemed. However, the logical connectives are no longer compositional, and what Brandom is most concerned with is the inferential role of propositions. This will not be compositional, as the connectives are not.
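The basic structure is easy to model. Here is a minimal sketch of incompatibility entailment using the banana example above; the toy vocabulary, the frame, and the function names are all mine, and the definitions are reconstructions from memory rather than Brandom's official Locke-lecture semantics.

```python
# A toy incompatibility frame, reconstructed from memory (not Brandom's
# official formulation). Sentences are strings; a set of sentences is
# incoherent iff it extends some basic incoherent set.

from itertools import chain, combinations

SENTENCES = {"banana", "green", "ripe"}

# "That is a banana", "that is green", and "that is ripe" cannot all be
# endorsed together (the example from the post).
BASIC_INC = [frozenset({"banana", "green", "ripe"})]

def incoherent(X):
    """Incoherence persists under adding further commitments."""
    return any(base <= X for base in BASIC_INC)

def incompatible(X, Y):
    """X and Y are incompatible iff their union is incoherent."""
    return incoherent(frozenset(X) | frozenset(Y))

def entails(X, p):
    """X entails p iff everything incompatible with p is incompatible
    with X (quantifying over subsets of the toy vocabulary)."""
    subsets = chain.from_iterable(
        combinations(sorted(SENTENCES), n) for n in range(len(SENTENCES) + 1))
    return all(incompatible(X, Y)
               for Y in map(frozenset, subsets)
               if incompatible({p}, Y))

print(incompatible({"banana", "green"}, {"ripe"}))  # True: all three together
print(incompatible({"banana"}, {"ripe"}))           # False: pairs are fine
print(entails({"banana", "green"}, "banana"))       # True
print(entails({"banana"}, "green"))                 # False
```

Note that nothing here assigns a "meaning" to a sentence from meanings of its parts; a sentence's role is fixed only by its position in the whole incompatibility frame, which is one way to see why this semantics is recursive without being compositional.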

What gives? Peregrin's book came out in between Making It Explicit and the Locke lectures, and I think the incompatibility logic was developed in the period after Peregrin's book and before the Locke lectures, so he can't be faulted for that. It looks like he gets the inferentialism of Making It Explicit right as far as the subsentential parts are concerned. It looks like things shift at the propositional level between Making It Explicit and the Locke lectures, but I'd have to check them both again to be sure.

Sunday, October 15, 2006

Best quote from a math book

In the book "Set Theory: An Introduction to Independence Proofs" there is a section in the intro in which the author lays out a few philosophical positions on set theory, namely platonism, finitism/constructivism, and formalism. The lovely quote is:
"Pedagogically, it is much easier to develop ZFC from a platonistic point of view, and we shall do so throughout this book."

Saturday, October 14, 2006

Linguistic Observations and other nonsense

The talk on Friday gave me the motivation to post something about two linguistic quirks in philosophy.

First, there is the wonderful quirk of prefacing an assertion or response with "Look...". If A raises an objection, B replies, "Look, XYZ". Or, to even raise an objection, A says "Look, your position says XYZ". Or, as prima facie evidence, "Look, we all admit the empirical reality of time and space." Etc.

Second, there is the equally wonderful quirk of saying that X just is Y. Usually "just is" is italicized. What are mental states? They're identical to brain states, i.e. mental states just are brain states. Space just is the form of outer sense. Noon just is 12 pm. Identity just is just-is-ness.

I have written over a hundred posts, but a few of them are pithy little notes about non-philosophical things. I think this post will put me at 100 real posts. Woo!

Monday, October 09, 2006

Not that anachronistic

Kant was a constructivist of sorts. He thought one shouldn't use excluded middle in arguments that are supposed to give us knowledge. He thought that in order to have knowledge of mathematical objects, they had to be constructed in intuition (I'm not sure of the phrasing needed to make this correct; we haven't gotten to that part of the Critique). He also rejected reductio arguments because they don't bring you into acquaintance with the object/proposition of the conclusion. How delightful! One odd thing is that he didn't think modus ponens was a good rule of inference, because it required infinite knowledge of the antecedent in order to affirm the consequent with certainty. But he did think modus tollens was good, since it only requires one disconfirming instance. I wonder if he has a view on arguments from the contrapositive. I don't know if they were even considered then.
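For reference, the three rules in play, in modern notation (which is of course not Kant's):

```latex
% Modus ponens, modus tollens, and contraposition:
\frac{A \to B \quad A}{B}
\qquad
\frac{A \to B \quad \neg B}{\neg A}
\qquad
\frac{A \to B}{\neg B \to \neg A}
```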

Sunday, October 08, 2006

Sentences are physical structures

Frank Jackson says repeatedly that sentences (types and tokens) are physical structures. I am confused about what he means by this. Just to be up front, I don't think anything in Jackson's argument hinges on this, but it is a very curious assertion. Starting with tokens, it seems fairly clear that they are physical structures. They are either markings on some surface, vibrations in the air, or pixels in proper formations of color. Well, if you think that some intentions have to be the causes of these, then that caveat must be added. Now, let's take the sentence "Snow is white", just to stick to something we all know and love. Hand-written tokens, spoken tokens, and digital tokens are all possible, but there isn't anything physical that connects them as being tokens of one thing. On to types. It is hard to see how types of anything are physical structures, as types are abstractions. If he means that they are types of physical structures, then he's probably right. They wouldn't be types of any one single physical structure, though. This would make them disjunctive, I guess. The ontology of language is kind of complex.

As an aside, Stanley Peters once told a funny story about how he gets confused when philosophers talk about sentences. He said that philosophers tend to see them as linear strings of characters, on a board or on paper. Linguists, he says, see them as complexly structured objects, not only in the syntactic dimension but also in the phonetic dimension. The transition from a wave form to a string of characters or a syntactic structure, let alone a semantic object, is a huge step. While we can idealize away a lot of it (and we do), one needs to stop and ask: is the resulting theory really a theory of anything like the language we deal with? It is nice to know that linguists think stuff philosophers do is weird, since I would venture a guess that philosophers think stuff linguists do is weird to some extent.

Friday, October 06, 2006

Philosophy enough

Quine has that famous saying, "Philosophy of science is philosophy enough." While it is quite a strong pronouncement, it isn't one that seems to have exerted a great deal of influence on Quine's own writing. He didn't write much about philosophy of science. He often wrote that science should have pride of place in our worldviews, but there isn't much philosophy of science proper. I think the closest he gets is discussing the relation of theory and evidence, which is certainly a bit of philosophy of science. It is sort of like the people who say philosophers should engage in naturalized epistemology, e.g. my hero Van, but don't include long discussions of psychological literature or cite any psychological findings. Gesturing at Psychology, with a capital 'P', is not enough, I think. Davidson, while not buying into this "philosophy enough" business, did "dirty" his hands with some empirical research in decision theory. He published the results in a book with Pat Suppes. It's understandable why one would want to stay away from the research and stick to the pronouncements. The former is frustrating and a huge time sink.

Wednesday, October 04, 2006

Interpretation and representation

One really helpful distinction that Etchemendy makes in his "Concept of Logical Consequence" is between representational and interpretational semantics. On representational semantics, a model gives a way the world might be, holding fixed what the words mean. Thus, a truth table for a batch of propositions will represent the myriad possibilities for how the world could have been, depending on which propositions are true or false. Interpretational semantics is the more standard model-theoretic kind. It varies the meanings of terms and tells us what would be true given that the words have the interpretation in question. Very roughly, the former varies worlds and holds meaning fixed, while the latter varies meaning and holds the world fixed.

This gets deployed by Etchemendy when looking at cases where the two conceptions come apart. Representational semantics supports counterfactuals involving differing ways things may be. Would a sentence A&B be true in a world where A held but not B? No. More concretely, would "snow is white" be true in a world where snow is black? No, again. Therefore, it isn't necessary that snow is white. Interpretational semantics tells us that "snow is white" would be true if 'snow' meant tar and 'is white' meant 'is black'. Again, there are clearly interpretations on which this sentence is false.
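The contrast can be spelled out in a toy model built on the snow example; everything here (the worlds, the interpretations, the function names) is my own illustration, not anything from Etchemendy.

```python
# A toy contrast between the two semantics (all names illustrative).
# A "world" assigns colors to stuffs; an "interpretation" assigns a
# stuff and a color to the words 'snow' and 'white'.

WORLDS = [
    {"snow": "white", "tar": "black"},   # the actual world
    {"snow": "black", "tar": "black"},   # a world where snow is black
]

INTERPRETATIONS = [
    {"snow": "snow", "white": "white"},  # the standard interpretation
    {"snow": "tar",  "white": "black"},  # 'snow' means tar, 'is white' means is black
    {"snow": "snow", "white": "black"},  # 'is white' means is black
]

def true_in(world, interp):
    """'Snow is white' is true iff the stuff 'snow' picks out has the
    color 'white' picks out, in the given world."""
    return world[interp["snow"]] == interp["white"]

# Representational: hold the standard meaning fixed, vary the world.
print([true_in(w, INTERPRETATIONS[0]) for w in WORLDS])       # [True, False]

# Interpretational: hold the actual world fixed, vary the meaning.
print([true_in(WORLDS[0], i) for i in INTERPRETATIONS])       # [True, True, False]
```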

Representational semantics is what connects to our ideas about necessity. If something is true in all the different situations represented, as the things under consideration change, then it is necessary. Interpretational semantics tells us when things are true in virtue of meaning, i.e. no matter what meaning is assigned to the parts of a sentence (holding to type restrictions and other things), it comes out true. We can't get necessity out of this, though, because it involves only meanings with regard to a single domain. Necessity deals with possible worlds, and so we need those in the picture to properly claim that something is necessary. It is pretty easy to slide between the two ideas, especially since they are extensionally equivalent in some cases. The urge is even stronger if we think of models as being worlds but evaluating sentences using interpretational semantics. We get to have our necessity cake and eat it too.