Three-week hiatus while I travel
Wednesday, June 28, 2006
Why do we get to single out a single causal chain as the one of interest in theories of reference like Kripke's theory of names? There are many, many causal chains connecting us to lots of things. The farther removed in space and time the supposed end of a chain is, the more stuff causally intervenes between it and us. Whenever one learns a name or a natural kind term, there are causal connections to things besides the person or stuff named: connections to things in the air, pollution, sound waves, light waves, nearby objects, (possibly) distant objects. Yet, for some reason, we are allowed to single out one particular chain of causes as the one forming the basis of reference. Is this just for narrative convenience? It would make quite a convoluted story to cover everything that causally intervenes between the ends of the "chains."
Posted by Shawn at 2:58 PM
Tuesday, June 27, 2006
In "Hermeneutic Practice and Theories of Meaning" Brandom asks a pair of questions: what can hermeneutics tell us about theories of meaning, and what can theories of meaning tell us about hermeneutics? Theories of meaning are the things that Dummett, Davidson, (some readings of) Frege, and Montague were interested in: theories of truth-conditions and of the compositional construction of sentence meanings from their parts. I'm not as clear about hermeneutics, since that tradition didn't feature much in my time at Stanford. The key hermeneutic figure in the essay is Gadamer. The essay is interesting because it asks how these two traditions can inform each other. I suppose this is done with different traditions in different places.
Part of the answer that Brandom gives (at least what seems like part of an answer after a quick reading) is that theories of meaning can serve as an input to hermeneutic practice, although they will not exhaust it. This seems basically right. There are interesting phenomena that come to light when you start looking at passages rather than sentences, e.g. inter-sentential anaphora and discourse anaphora. There are also much more interesting pragmatic effects. Really, pragmatics doesn't even start until you get beyond the single sentence in isolation. As I've said before, single declarative sentences have gotten too much attention. John Perry and Stanley Peters have floated the idea that answerhood for questions is intrinsically pragmatic, relativized to information available to the speaker and interlocutor as well as the speaker's intended aims.
This is all well and good, but what does the input do for hermeneutics? It constrains the sorts of interpretations that can be placed on texts. It does this by specifying conceptual and propositional content. Interpretation is achieved by supplying auxiliary premises (the context) to supplement the text to form de re ascriptions. These ascriptions will be licensed just in case the premises, together with the text, justify them. De dicto ascriptions are also allowed and provide a contrast to the de re ascriptions. By navigating between the web of inferential consequences de re and the web de dicto, the interpreter can figure out where the conceptual differences lie and where everyone is in agreement. So, this is more or less a consequence of the theory in Making It Explicit.
If I understand the essay correctly, the theories of meaning tradition supplies constraints to interpretation and the hermeneutic tradition supplies problems for interpretation.
Brandom fleshes out the idea of interpretation as an interplay between de dicto and de re attitude ascriptions. He thinks that you can attribute a dictum to someone as an entry into the conversation. You can say "What did he do?" in response to "I'm mad at him" as your first move in a conversation you came to late, in which you do not know who 'he' refers to. You understand what is said when you can move from the de dicto ascription "X said that he is mad at him" to the de re ascription "X said of Y that he is mad at him." This can be done even if X wouldn't put it that way. Navigating between the ascriptions that you would make as a deontic score-keeper and the ascriptions that the speaker would accept allows the score-keepers to find the differences in premises accepted by the various parties involved. Being able to switch from a de dicto ascription to a de re ascription is interpretation because the de re ascription is put in your own words; that is, it need not be in words that the ascribee would accept.
How does this differ from Davidson's (radical) interpretation? Score-keepers don't seem to be constructing a theory. They are just keeping track of commitments and entitlements, and they are not trying to match up sentences with values and holds-true attitudes. This isn't quite right, however, because the holds-true attitudes are fleshed out in Brandom's account as acknowledgments of commitments. The holds-true attitudes are in there, in a form, so this might not be the point at which the two theories differ.
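The score-keeping idea above can be put concretely. Here is a toy sketch of my own (not Brandom's formalism, and the inference rule is a hypothetical example): each speaker's score is a set of acknowledged commitments, and asserting a claim also commits the speaker to its material-inferential consequences.

```python
# Hypothetical material-inference rules: a claim maps to the set of
# claims it commits the asserter to. This single rule is made up for
# illustration.
CONSEQUENCES = {
    "Pittsburgh is west of Princeton": {"Princeton is east of Pittsburgh"},
}

class Scorekeeper:
    """Tracks each speaker's acknowledged commitments (deontic score)."""

    def __init__(self):
        self.commitments = {}  # speaker -> set of claims

    def record_assertion(self, speaker, claim):
        """Record an assertion plus the consequential commitments it carries."""
        score = self.commitments.setdefault(speaker, set())
        stack = [claim]
        while stack:
            c = stack.pop()
            if c not in score:
                score.add(c)
                # Asserting c also commits the speaker to what c entails.
                stack.extend(CONSEQUENCES.get(c, ()))

keeper = Scorekeeper()
keeper.record_assertion("X", "Pittsburgh is west of Princeton")
# X's score now contains both the asserted claim and its consequence.
```

The point of the sketch is only that score-keeping is bookkeeping over commitments, not theory construction in Davidson's sense.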
I wonder what Wittgenstein would think about Calvinball. Calvinball is a game invented by Calvin of the Calvin and Hobbes comic. The only rule of Calvinball is that there are no rules. What counts as a move in the game? Pretty much anything. But is this a problem? If anything counts as a move, does nothing count? I'm inclined to say no. There are no conflicting rules or conflicting interpretations being appealed to. There aren't distinguishing rules being appealed to either. Wittgenstein would probably think that there is too little structure to the "game" for it to count as a game.
Sunday, June 25, 2006
Why should one do semantics? There is a quote from Davidson (I think I read it in the early chapters of the Heim and Kratzer text) that says, roughly, that semantics should not tell you anything you don't already know as a speaker of the language. My question is obviously directed at specific kinds of semantics, namely formal semantics in either the Montagovian or Davidsonian paradigm. It applies somewhat to other kinds though. If these theories aren't telling us anything about the meaning of sentences that we didn't already know, what is the reason for pursuing them? There are a few technical kinks to work out in different places. In the Montagovian tradition there are problems with type mismatching and non-intersective adjectives. In the Davidsonian tradition there are problems with context sensitivity.
I suppose that philosophers are interested in semantics because it is supposed to make clearer some philosophical problems. The best example of this would be to help in coming up with a general theory of meaning. By creating semantic theories for different languages, we can see what sorts of properties they share, and so what sorts of properties meaning itself has.
Another shot at an answer is that by getting clear about what the formal meaning of a sentence is, we take a step toward getting natural language into a form on which we can do serious logical inquiry. We can start proving theorems and finding out what follows from what. This is an extension of the Fregean-Russellian idea about the logical form of language.
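To make the compositional picture concrete, here is a tiny fragment in the Montagovian spirit. It is my own toy illustration, with a made-up two-entity model: word meanings are functions and sets, and the truth value of a sentence relative to the model is computed from the meanings of its parts.

```python
# A hypothetical model: each word denotes a set of individuals.
MODEL = {"dog": {"rex"}, "brown": {"rex", "mud"}}

def noun(word):
    """Common nouns denote sets of individuals."""
    return MODEL[word]

def adj_intersective(word):
    """Intersective adjectives denote functions from noun sets to noun sets:
    'brown dog' = things that are brown AND dogs."""
    return lambda noun_set: MODEL[word] & noun_set

def is_a(individual, property_set):
    """The copula-plus-indefinite is just set membership."""
    return individual in property_set

# "Rex is a brown dog", composed bottom-up from its parts:
sentence_value = is_a("rex", adj_intersective("brown")(noun("dog")))
```

Note that this intersective treatment is exactly what breaks on adjectives like 'fake' or 'alleged' (a fake gun is not in the intersection of fakes and guns), which is one of the Montagovian kinks mentioned above.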
Maybe producing clear, formal truth-conditions for sentences will help us make sense of philosophically loaded words like 'necessity' or 'virtue'. This seems pretty doubtful.
Studying syntax, as linguists do it, gives (or is purported to give) a glimpse of the furniture of the human mind, so even if nothing else comes of syntactic theory, it will have given us some more understanding about the mind. Of course, it looks like other stuff will come of syntactic theory, so it is doing pretty well for itself.
Wednesday, June 21, 2006
One of the central parts of Brandom's theory is the propositional contentfulness of assertions. Since he is attempting to give an account of meaning in terms of inference and use, he is not allowed to use semantic vocabulary to define propositions. He doesn't think they are Russellian, since he doesn't want his account to depend on the notion of reference. They aren't sets of possible worlds, since that would divorce them from use. They aren't abstract, structured entities, I don't think. That would also separate them from use. I'm not quite sure what they are, although I'm doubtful that the specific ontological status of propositions matters for the story he's telling.
Posted by Shawn at 6:19 PM
Saturday, June 17, 2006
One of Brandom's more surprising claims in chapter 7 of Making It Explicit is that deixis presupposes anaphora. The argument starts with how his theory distributes propositional content. Content consists in what a proposition entails and what it is incompatible with. Once you have sentence-level content, you can abstract out subsentential contents, of terms and predicates, by substitution. Deictic (and indexical) terms are token unrepeatables: two utterances of one type are not guaranteed to be coreferential. It is a safe bet that they won't be, actually. Since token unrepeatables are unrepeatable, they can't be used in inferences; e.g., "That dog is brown, therefore that dog is brown" is not a good inference. It looks like deictic terms have no content unless they can be linked through an anaphoric chain. Brandom calls these recurrence structures, or chains. Once you have an anaphoric mechanism in place, you can create recurrence relations that allow for inferences based on deictic terms, with the deictic term as the antecedent, the origin of the anaphoric chain. This lets deictic terms be correctly used in inferences and thereby acquire content. Therefore, deixis presupposes anaphora.
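The structure of the argument can be sketched in code. This is my own toy model, not Brandom's: each demonstrative utterance is a distinct, unrepeatable token, and only tokens gathered into one recurrence chain license intersubstitution in inference. Two tokens of the same type that are not chained together license nothing.

```python
import itertools

_token_ids = itertools.count()

class Token:
    """One unrepeatable utterance of a deictic term like 'that dog'."""

    def __init__(self, form):
        self.form = form
        self.id = next(_token_ids)  # same-type tokens are still distinct tokens

class Chain:
    """A recurrence structure whose origin is a deictic token."""

    def __init__(self, origin):
        self.members = {origin.id}

    def link(self, token):
        """Anaphorically link a later token back to the chain."""
        self.members.add(token.id)

    def cosubstitutable(self, t1, t2):
        """Substitution inferences are licensed only within one chain."""
        return t1.id in self.members and t2.id in self.members

t1 = Token("that dog")   # deictic origin of the chain
t2 = Token("it")         # anaphor picking up t1
chain = Chain(t1)
chain.link(t2)

t3 = Token("that dog")   # same type as t1, but an unchained new token
```

Here t1 and t2 support "That dog is brown, therefore it is brown," while t1 and t3 support nothing, despite being tokens of the same type; that is the sense in which the deictic token only contributes to inference once the anaphoric mechanism is in place.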
I don't think he is using the term 'deixis' much differently than is standard. He does say that his discussion of anaphora focuses on different issues than the ones most linguists are interested in. He is interested solely in applications of anaphora, in what it means for something to be anaphoric on something else. He characterizes what interests linguists as the rules for determining the proper antecedents of anaphoric terms. I think it is a linguistic universal that languages have deictic words for the first person and for places. He wants to say that these presuppose anaphora. This is a sort of weird claim. Is it enough for a counterexample to show that there can be a language that has deixis but not anaphora? It would have to be a full-fledged language, not an impoverished language game like that of the builders in Wittgenstein's Philosophical Investigations. This is because the builders fall outside the field of Brandom's inquiry. He limits himself to rational linguistic practice characterized as a game of giving and asking for reasons. The builders can't give reasons or ask for them, so they are excluded.
To get back to Brandom's claim, I'm not sure what to think. Anaphora is tightly linked to the pronoun system, and the pronoun system is also tightly linked to deixis. Of course, there are anaphoric phenomena that are not pronoun-based or deictic, and vice versa. I think Quine might have been right about the claim that these things come in bundles of concepts. You acquire the first part of the pronoun system with some basic deixis and anaphora. What is really needed is an example of anaphora that is clearly prior to deixis. Where would this come from?
Thursday, June 15, 2006
Brandom's reading of Frege's Grundlagen leads him to an interesting take on the Julius Caesar problem. He thinks that if you have an indirectly defined term fX, introduced by something like "the direction of X = the direction of Y iff X is parallel to Y," then if you hold P(fX) and X=Y, you will be committed to the substitution P(fY). You will not be committed one way or the other to the substitution P(Z). The reason is that he thinks Frege's requirement on understanding identity statements is too strong. Frege required that the sense of all identity statements be settled, in principle, before you could use a new term like fX, where X is an old term of the language.
I'm not sure exactly why, but this leads Brandom to the conclusion that something can't be referred to by a singular term unless it can be referred to by many singular terms. This is a sort of term holism: being able to use one means being able to use many. This one will require a lot more thought on my part.
Monday, June 12, 2006
Some people think that we can separate narrowly semantic knowledge from world knowledge. This is the difference between the entries in a dictionary, which give the meanings of words, and the encyclopedic knowledge that constitutes our knowledge of facts. The former sort of thing will have the essential details of, say, cats needed to explain 'cat'. The latter will include information about the evolutionary history of cats, the sounds they make, their common status as pets, their fabled love of chasing mice, and other things. These bits of information are not essential to the meaning of 'cat', so the story goes. It seems like this line of thought presupposes a strong analytic-synthetic distinction. If we can sort out the difference between meaning-constituting and non-meaning-constituting bits of information, then we have a way of specifying what is true in virtue of meaning and what is true in virtue of fact. This is a dubious idea. The moral we should draw is that there is no sharp line between dictionary knowledge and encyclopedic knowledge. As Marconi puts it, there is no perfect or complete encyclopedic competence.
Friday, June 09, 2006
Unger has an argument that nothing is flat. He thinks this because 'flat' is an absolute term, meaning that something is flat iff there is nothing flatter than it, or there is no way it could be flatter. Since we can be pressed into saying that there is something flatter than whatever object we are talking about, we can never truthfully say that something is flat. This mixes semantics and metaphysics in an unfortunate way. Somehow we have determined the meaning of a word such that we can never use it to assert a simple, positive, truthful proposition, i.e., one that is not negated or embedded. This is kind of mystifying. It is like one of the problems I have with Cappelen and Lepore's ideas. We can never mean what we say, since we mean truthful things but most of what we say will end up false. Of course, their response is that we express truthful things in the total speech act content of an utterance, but this isn't particularly convincing. This route also isn't open to Unger, because I don't think he buys into the speech act pluralism of C&L.
Thursday, June 01, 2006
Some people think that what constitutes the answer to a question is an entirely semantic matter. I think Sag, Ginzburg, Groenendijk, and Stokhof all hold this view. For them, answerhood seems to be a function of the semantic content of the question. But there are things this seems to leave out. What constitutes an answer depends, at least in part, on what information the asker has. If I know that the average community member is 5' tall and I ask whether community member C is tall, then your answer 'C is 6' tall' will be an answer. Semantically, though, my question and your response are independent. Another problem with a purely semantic account is that it leaves out the possibility of non-verbal answers. For example, silence after a doctor is asked "Did he make it?" regarding a patient is an answer. A shrug is also an answer. In the case of a shrug or a shake of the head, one might say that the agent indicated a propositional content. I don't think this is a good route to take, and in the case of silence, it doesn't seem tenable at all. My conclusion: pragmatics is deeply involved in questions and answerhood.
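The height example can be made precise. Here is a toy sketch of my own (not drawn from any of the authors above): a response answers a question relative to the asker's information just in case the information plus the response together settle the question, even when the response alone is semantically independent of it.

```python
def settles(info, response, question_pred):
    """Does the asker's background info plus the response decide the question?"""
    facts = dict(info)
    facts.update(response)
    try:
        question_pred(facts)  # decidable from the pooled facts?
        return True
    except KeyError:          # a needed fact is missing, so nothing is settled
        return False

# Asker's info: the average community member is 60 inches (5') tall.
info = {"average_height": 60}
# The response only reports C's height; semantically it says nothing about tallness.
response = {"height_C": 72}
# 'Is C tall?' read, for this asker, as: is C's height above the known average?
is_tall = lambda facts: facts["height_C"] > facts["average_height"]

answered = settles(info, response, is_tall)    # info + response settle it
unanswered = settles({}, response, is_tall)    # response alone settles nothing
```

The same response counts as an answer for one asker and not for another, which is the relativization to information states that a purely semantic account of answerhood leaves out.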