Another view of philosophy, one which I will attribute partially to Ken Taylor (I am not sure if he actually adheres to it; it has come partially out of things he's said, and I use a phrase of his, which is why the attribution is only partial), is to use a problem to mark off a region of logical space and walk through all the solutions to the problem in that space. When considering a problem, one should exhaustively canvass the solutions to see which has the greatest number of theoretical virtues in addition to providing the best solution to the problem at hand. This conception of philosophy has its charms. It makes it clear what the problems are. It can also be a bit tiring to write or read, because there are usually so many options to go through. Does this view of philosophy leave anything out? I am inclined to think that if philosophy as a whole were like this, it would leave Wittgenstein out completely. Whether this is a bad thing I set aside. It also doesn't seem to work well with holism. If I'm a holist about something (meaning, scientific theories, truth, etc.), then lightening the theoretical burden in one area of my theory will (usually?) require me to lean more heavily on another part of it. This view of philosophy would also have the potential to miss overarching insights and connections. If I am working on a particular problem with a particular set of possible solutions, then there is a very good chance that I will not notice the similarities to another problem or a different area of philosophy, e.g. the connections between mind and language or the repercussions of agency theories for ethical theories.
Sunday, April 30, 2006
The tests for context sensitivity in Cappelen and Lepore's Insensitive Semantics involve disquotational indirect speech reports of the form 'X said that S'. Is there anything special about 'says that' that makes it better to use than any other attitude verb? What about 'thinks that' or 'believes that'? This is kind of tricky because I'm not sure if any propositional attitude ascriptions can come out true on C&L's theory. For example, even if I say 'It is raining', I don't believe that it is raining (full stop); I believe, e.g., that it is raining here. It seems that they want to use 'says that' because it is, at least prima facie, the most relevant verb for the notion of what is said. It doesn't seem to generalize to other kinds of reports, though. What exactly is 'says that' tracking that 'believes that' isn't?
Saturday, April 29, 2006
What kinds of ambiguity are there? Offhand, there seem to be syntactic, semantic, pragmatic, and phonological ambiguity. Syntactic ambiguity is the standard kind of structural ambiguity. 'I saw the man with the telescope' is ambiguous between 'with the telescope' modifying 'the man' and modifying 'saw'. The underlying structure of the sentence is ambiguous between two or more different structural trees. Semantic ambiguity seems to be limited to scope ambiguities between operators and quantifiers. I'm not sure if there is any other kind of semantic ambiguity. Pragmatic ambiguity is a little less clear. We might say that it happens when we aren't sure which object is being pointed at or which is the referent of a name. This might be semantic, though. Another kind of pragmatic ambiguity is one that John Perry pointed out. An agent might not be sure where to stop a chain of Gricean reasoning, or it may be unclear which maxim to begin the reasoning with. An example of the former is this. If we begin with 'he is coming right for you', the agent might want to reason all the way to 'X is coming for Y', where X = the referent of 'he' and Y = the referent of 'you'. Or the agent might want to stop once X is identified. There may not be any reason to go all the way to full propositional content; it might make more sense to stop before that. Phonological ambiguity might not be a real class of ambiguity. The idea is that an utterance must first be disambiguated into its proper word breakdown. For example, the spoken form of 'That boy spat' might be mistakenly understood as 'That boy's Pat'. Determining which sentence has actually been uttered will have consequences that spill over into the other three categories mentioned. This makes it sound pretty important, but I'm not convinced that it is a distinct class of phenomena over and above the others.
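To make the structural point about the telescope sentence concrete, here is a toy sketch of the two trees as nested tuples. This is my own illustration, not the output of any actual parser, and the category labels are informal:

```python
# Toy constituency trees for 'I saw the man with the telescope',
# represented as nested tuples. Labels are illustrative only.

# Reading 1: the PP 'with the telescope' attaches to the verb 'saw'
# (the seeing was done with the telescope).
vp_attachment = ('S', ('NP', 'I'),
                 ('VP', ('V', 'saw'),
                        ('NP', 'the man'),
                        ('PP', 'with the telescope')))

# Reading 2: the PP attaches to 'the man' (the man has the telescope).
np_attachment = ('S', ('NP', 'I'),
                 ('VP', ('V', 'saw'),
                        ('NP', ('NP', 'the man'),
                               ('PP', 'with the telescope'))))

# Same word string, different underlying structures.
print(vp_attachment != np_attachment)
```

The words are identical in both tuples; only the nesting differs, which is exactly what 'structural ambiguity' amounts to.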
Thursday, April 27, 2006
This one will be meta-philosophical. One view of philosophy, that of Sellars, is that it should explain how things, in the broadest sense, hang together, in the broadest sense. This seems to me to be neutral as far as the relation between science and philosophy goes. It also seems to me to be a conception of philosophy that is required to pay attention to the intuitions of non-philosophers. For example, it should explain why, if there is no moving 'now', there is such a compelling feeling of one. Now, one question I have is when we are required to explain away pre-theoretic intuitions and when they can simply be dismissed. Certainly this kind of philosophy is required to explain why certain intuitions held sway. But it seems like some intuitions should just be dismissed. Is there a clear distinction between the two sets of intuitions?
Posted by Shawn at 6:26 PM
I think it is a linguistic universal that all languages have deictic words. They all have first-person pronouns and words like 'here' and 'now'. This makes sense, since these are very useful and it would be hard to get along without them. It would be interesting to see which indexical/deictic words are universal. It would also be neat to figure out what other indexicals there could be but are not found in any actual language. For example, we could easily create a me-complement which refers to everybody besides the speaker. Would any others be useful? Would any others be philosophically interesting? Maybe plural indexicals that specify numbers of people, like '2-we' for a group of two. One question that would be worth exploring is whether there are any indexicals which are not found in English but are found in other languages, and what the differences are. There is the usual example of tu/vous in French, which carry different levels of familiarity with the addressee, but these map onto 'you' in a straightforward way. Are there any more exotic ones?
Wednesday, April 26, 2006
I take it that formalists about math and logic say that these endeavors are just manipulations of meaningless symbols in specified ways. How does a formalist interpret existence proofs such as the one used in the completeness proof of modal logic?
Tuesday, April 25, 2006
For better or worse, the sciences (at least some?) are often taken to have, maybe even to take, the goal of uncovering truth. Questions arise as to how scientific theories that are revised or thrown out get at the truth or an approximation of it. It seems to me that in engineering endeavors, truth does not enter into the picture, at least not the standard Tarski-style version of truth or any other standard philosophical theory of truth. On my rough characterization of engineering, what works is considered to be most important. This leads to the idea of 'hacks', most often seen in computer science contexts. If your program is not working quite right, an ugly variable name and assignment can fix it. Theoretically this move, using such a variable, is rather ad hoc, but it gets the job done, which is the main point. Similarly, if you are building a mousetrap and a part isn't working quite right (having never built a mousetrap, I'll be vague here), you can slap on some extra glue and a reinforcing piece, for example, to fix it. Truth doesn't enter into the picture. What works (praxis?) is the important concept. I think this is underappreciated in philosophy. That being said, I'm not sure how it fits into many philosophical theories, since engineering doesn't fit into many philosophical theories. Not yet, at least.
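A hypothetical example of the sort of hack I have in mind; every name and number here is invented for illustration, not taken from any real program:

```python
def scale_reading(raw):
    """Convert a raw sensor count to volts (hypothetical device)."""
    # The principled part: the conversion factor from the (imagined) datasheet.
    volts = raw * 0.0049
    # HACK: readings run about 0.03 V high on our rig and nobody knows why.
    # Subtracting an ad hoc fudge constant is theoretically ugly, but it
    # makes the numbers come out right, and that is the engineering point.
    fudge = 0.03
    return volts - fudge

print(scale_reading(100))
```

The fudge constant tracks nothing true about the conversion; its only justification is that the output now matches what the device should read.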
What are the necessary and sufficient conditions for a lexical item's being an indexical? I'm thinking about this to answer a question that Bill Lycan raised in conversation. What differentiates an indexical from a deictic term more generally? Indexicals are a subset of deictic terms, but the converse is not true, i.e. they are not the same set. 'Come' is not an indexical, but it is deictic. This is an important question to answer for the debate about the semantics/pragmatics distinction. Maybe Ken Taylor is on the right track when he says that not all context-sensitivity is the same. There can be things that are context-sensitive, like 'I', which fall into the domain of logic and semantics. Then there are things that are speech-situationally sensitive, like 'this', which fall into the domain of action and pragmatics. I don't quite understand what differentiates a speech situation from a context, but it seems like a promising idea. Perry makes a distinction along these lines. The discourse situation is the situation (in the situation semantics sense) in which a speaker makes an utterance. A resource situation is a situation that is salient and accessible and that supplies information needed to understand (not sure if this is exactly right) the meaning of some of the terms of the utterance. Resource situations are needed to understand demonstratives, for example. The two sets of distinctions seem similar, although Perry's distinctions are purely for semantic purposes while Taylor's are for pragmatics as well.
Wednesday, April 19, 2006
At least in one tradition, interpreting the meaning of a speaker involves, in large part, assuming that she is acting in accordance with the cooperative principle and the Gricean maxims. This is fine as far as conversations go. These are some general guidelines for interpretation in those settings. The interpreter tries to identify the speaker's intentions and combine those with what the speaker said. Doing this puts her in a position to apply the normal Gricean reasoning (although she may not even need what is said) to understand what implicatures were meant. What happens when we shift to text? I don't think the cooperative principle applies to authors. It is even more doubtful that the maxims apply in their normal way either. For one, they are conversational maxims. Secondly, different facets of text seem most relevant for interpretation. I doubt that authors always try to be as informative as possible. After all, that is what makes detective novels so exciting. What principles guide textual interpretation? We don't even have to be talking about literary interpretation (which is probably at the harder end of the scale). Interpreting newspaper articles can be difficult. Are there any general heuristics or maxims for textual interpretation like the ones Grice suggested for conversation?
On Grice's theory of conversational implicature (Bach's interpretation thereof), only utterances by a speaker have implicatures. Implicatures are determined by the communicative intention of the speaker. What role does the hearer/interpreter play in this process? Does she constrain the possible implicatures that the speaker can intend? If an implicature is inappropriate for the audience, can the speaker seriously intend for it to be picked up? One thing that makes me uneasy about this picture is that implicature generation seems to be due to the speaker and her intentions while implicature retrieval (my phrases) seems to be due entirely to the hearer. After all, the speaker won't retrieve her own implicatures; they were for the interlocutor. In what sense is an implicature generated if the hearer doesn't pick up on it? Further, if the speaker intends the listener to pick up on one implicature, via the maxim of manner for example, but she picks up on a reasonable conclusion drawn via the maxim of quality, is that an implicature? I'll have to go back through Grice and Bach to figure that out. The latter case might be an impliciture; although, I must confess, I don't really understand that concept well.
Tuesday, April 18, 2006
Diego Marconi argues that there are two aspects to lexical competence, the inferential aspect and the referential aspect. The inferential aspect is the part of meaning that lets us draw inferences that, while not logically valid, are what he calls semantically valid. Brandom might call these inferences materially good. An example is the inference from 'x is a cat' to 'x is an animal'. This is one aspect of lexical competence, the intralinguistic part. There is another part that is needed, the connections between the words and the world. This is the referential part. This consists in being able to identify, in the case of 'cat', that something is a cat. I'm not sure, but I think these two aspects exhaust lexical competence. Of course, different agents might have different degrees of each of these. The trained scientist might have a highly developed inferential knowledge about goats while a shepherd might have a particularly good referential knowledge of them.
This picture is good as far as it goes, but there are some problems with it that worry me. How do these two aspects work with function words like 'of' or 'in'? I'm not sure how to refer to 'in'-ness. Maybe the individual function words don't have these aspects but the clauses they form when combined with other words do. How does it work with 'I' and 'that'? Clearly getting the reference wrong for 'I' would indicate that one does not understand the word. But how does one recognize that someone else is using it correctly? Once I understand it, wouldn't I expect other people to use 'I' to refer to me? The most straightforward answer is that something like Perry's roles are involved. It isn't the reference of 'I' that matters for understanding so much as the role. Another problem is with expressive words like 'ouch' and 'oops'. These don't, by my lights, have any referential content. They have some inferential content. I guess that Marconi would say that competence for these words is constituted by the inferential content since there is no referential content, a kind of limiting case at one end of the spectrum. What falls at the other end of the spectrum? Names?
Husserl had a concept he called noema that was a generalization of meaning to, in his words, the realm of all action. There seems to be something basically right about the idea. It makes some sense to say that there is a kind of meaning that action of any kind has, one similar to linguistic meaning, which one kind of action, talking/uttering/etc., has. There is something deeply confusing about it, though. At least with linguistic meaning, we have a decent idea of what the parts are, some relations of parts and wholes, fairly good heuristics for interpreting them, and natural ways to demarcate the parts and wholes. None of this is the case with action generally. For example, people are pretty bad at understanding 'the meaning' of an action, say, a bombing. Who was the target, what was the purpose, what ideas were under attack, what ideas were being promoted, etc. are all questions that are more or less equally answerable from a given bombing. Additionally, the parts of an action are extremely hard to differentiate. Apart from this, it seems like Husserl had a good idea.
Posted by Shawn at 1:21 AM
Monday, April 17, 2006
Fodor's denotational conception of the lexicon says that 'dog' means dog, and so on for the words in the lexicon. All of these meanings are atomistic: they have no internal structure and no inferential relations to other words. I haven't read any of Fodor's main works explaining his theory, although I have read Concepts. A couple of questions just occurred to me. Am I correct in assuming that Fodor's lexicon has the following entries: 'I' means I; 'those' means those; 'yes' means yes; 'of' means of? These are pretty obviously wrong, so I'm curious what his theory actually says.
Here are some questions that I think are open and that are worth either coming up with answers to or coming up with reasons to think they are bad questions:
What is a language? How do we differentiate between languages?
What are the necessary and sufficient conditions for being one word?
When are word meanings identical? When are they similar?
I am somewhat suspicious of Lewis's answer to the first question. It is an answer, but it does not sit well with me to think that languages are functions of the kind he described. They might represent languages, but I'm not sure how well that will work. I think it would be a more reasonable position if it were supplemented with an account of how to get from a language in Lewis's sense, which concerns semantics, to pragmatics. His concern with semantics gives him a more logical trajectory. I think an answer to my question should address this, but it should point towards something more in line with the philosophy of action. I don't know if there are any good answers to the other questions.
Posted by Shawn at 1:17 PM
Sunday, April 16, 2006
How are methodological assumptions defended in philosophy? They can be defended in, say, physics by the accuracy of predictions, confirmation or disconfirmation of theory, or the overall productivity of a research program. In philosophy none of these hold water in the same way. If straightforward argumentation is the way to defend them, then how are methodological assumptions different from any other assumptions? I'm thinking of Grice's Modified Occam's Razor, which says you should not multiply semantic posits beyond necessity. What sort of thing counts as justification or evidence for that?
Posted by Shawn at 1:55 PM
Thursday, April 13, 2006
The beginning of Insensitive Semantics is motivated by an appeal to Kaplan's "Demonstratives", in which Cappelen and Lepore cite the curious fact that Kaplan didn't give a reason for restricting his attention to the set of words that he discussed. I'm surprised that this uncritical interpretation of Kaplan is used for any motivating purpose. He was looking at a different problem and had a different goal in mind. The set of words he looked at, which is smaller than many people realize, is just the set that works well in his logic without intentions. This fact cannot be used in any defense of semantic theses. It shouldn't motivate the project in Insensitive Semantics either.
There are two ideas that seem to come up in tandem often in philosophical theorizing: reflexivity and iteration. The liar paradox is reflexive, while theories of truth that try to deal with it via a hierarchy of truths, with some distinguished class of truths being 'grounded', are iterative. Common knowledge is iterative, i.e. all agents know X, they know that they all know it, etc. This is iteration of mutual knowledge. The canonical interpretation of Grice's theory of implicature posits an iterative hierarchy of communicative intentions: an agent intends for an audience A to recognize X, intends for A to recognize that intention, etc. Bach put forward an interpretation of Grice's theory that dispenses with the iterative hierarchy and replaces it with a reflexive intention. The reflexive communicative intention is satisfied by its recognition. I will have to go back to Grice's articles to see how this squares with what he said, but it is appealing: we can dispense with an infinite hierarchy and replace it with an intention that refers to itself. Does this create a paradox? The reflexive paradoxes (e.g. the liar) are brought about by conditions that are mutually contradictory. The sentence's being true implies its falsity, and the converse. Or, in the case of Russell's paradox, a set's containing itself implies that it does not contain itself, and the converse. What happens if we move this to intentionality? If I intend that my intention be satisfied only by its non-recognition, then I would be in trouble. However, it does not seem like this could be a communicative intention, since at the very least those must be recognized to be satisfied. It looks like Bach's interpretation is safe from that.
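The iteration/reflexivity contrast can be put in the standard epistemic-logic notation for common knowledge; this is the textbook treatment, not anything specific to Grice or Bach:

```latex
% 'Everybody knows' operator for agents 1..n:
E\varphi \;=\; K_1\varphi \land \cdots \land K_n\varphi

% Iterative picture: common knowledge as an infinite hierarchy
E\varphi,\quad EE\varphi,\quad EEE\varphi,\quad \ldots

% Fixed-point picture: one self-referring condition in its place
C\varphi \;\leftrightarrow\; E(\varphi \land C\varphi)
```

Bach's move is structurally like trading the hierarchy for the fixed point: a single intention whose content mentions itself, rather than an unbounded stack of intentions about intentions.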
As an aside, both Recanati and Bach provide differing interpretations of Grice that take their views on language to very different places. It would be worthwhile to read through both of their writings and work out exactly how their views differ and how these differences play out in their ideas on, say, pragmatics.
Wednesday, April 12, 2006
If I understand Lewis's counterfactuals correctly, 'if X were the case, then Y' is evaluated with respect to the nearest set of possible worlds in which X is the case. What happens with counterfactuals whose antecedent is 'If I were you'? There aren't any worlds in which I am you. I don't imagine there are any worlds where our counterparts are identical either. Do these all have impossible antecedents, and so come out vacuously true? How about ones like 'If Harry were Sally', or any that have directly referential terms in the antecedent like that? It would be ad hoc to come up with a special semantics for those cases, but they don't seem to fit the mold of the 'If F were G' cases.
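For reference, Lewis's truth clause can be sketched as follows; I state it with the limit assumption for simplicity, which Lewis himself does not officially make:

```latex
% Counterfactual 'if X were the case, then Y' evaluated at world w,
% with the limit assumption (there always are closest X-worlds):
w \models X \mathrel{\Box\!\!\rightarrow} Y \iff
\begin{cases}
\text{vacuously true,} & \text{if no world accessible from } w \text{ satisfies } X,\\
\text{true,} & \text{if } Y \text{ holds at all the closest } X\text{-worlds to } w,\\
\text{false,} & \text{otherwise.}
\end{cases}
```

On this clause, antecedents true at no world come out vacuously true, which is why the 'If I were you' cases are puzzling: they seem contentful rather than vacuous.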
Sunday, April 09, 2006
How do T-sentences for ambiguous sentences work? 'Adam saw her duck' is true iff Adam saw her duck. 'Bill was at the bank' is true iff Bill was at the bank? Do the T-sentences have extra semantic or syntactic information encoded in them that differentiate the readings of the sentences?
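One familiar option, sketched here as a possibility rather than anyone's official view, is to relativize T-sentences to disambiguated logical forms, with subscripts (my own illustrative notation) marking the readings:

```latex
% T-sentences indexed to readings of the ambiguous sentence;
% the subscripts are illustrative, not standard notation.
\text{`Bill was at the bank'}_{bank_1} \text{ is true iff Bill was at the river bank.}\\
\text{`Bill was at the bank'}_{bank_2} \text{ is true iff Bill was at the financial bank.}
```

On this approach the theory never assigns truth conditions to the ambiguous surface string itself, only to its disambiguations, so the extra information lives in the mapping from strings to logical forms.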