Thursday, May 25, 2006

Frege-Searle hypothesis

In Interrogative Investigations, Ginzburg and Sag discuss the Frege-Searle hypothesis, which says that commands, assertions, and questions all have common contents and differ only in force. For example, 'Open the door', 'The door is open', and 'Is the door open?' all have the same content but different force. Ginzburg and Sag don't rehearse the arguments in favor of this hypothesis, so I checked to see what Searle and Frege had to say. Searle asserts the hypothesis as correct at the start of Speech Acts. He gives an argument somewhere (according to someone who knows Searle) that the reason is that commands and questions can be paraphrased as assertions using the same proposition/sentence. I think there is some confusion of sentences and propositions here. Conclusion: Searle's support for it is bad. Frege says that polar questions and assertions clearly have the same content; a question is just a request for an assertion of the proposition or of its denial. It isn't clear that he thinks commands have propositional content. Frege thinks that wh-questions are like unsaturated propositions, awaiting a filler, which is whatever will answer the question. Conclusion: Frege's support for it is bad.

Why was this hypothesis ever accepted if the initial arguments for it were so bad (they were more assertions than arguments)? My guess is intuitive appeal. Questions, commands, and assertions seem similar on the face of it, so it makes sense to think they have something in common. What? Propositional content. Why? No clue.
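
To make the shape of the hypothesis concrete, here is a minimal sketch of my own; the representation is illustrative, not Searle's or Frege's formalism. A speech act is just a force paired with a content:

```python
# A toy rendering of the Frege-Searle hypothesis: one shared content,
# varying force. The names here are illustrative, not anyone's formalism.
from dataclasses import dataclass

@dataclass
class SpeechAct:
    force: str    # 'assert', 'ask', or 'command'
    content: str  # the allegedly shared propositional content

p = "the door is open"
acts = [SpeechAct("assert", p), SpeechAct("ask", p), SpeechAct("command", p)]

# The hypothesis amounts to the claim that speech acts factor this way:
# same content slot, different force slot.
print(all(act.content == p for act in acts))  # True
```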

Wednesday, May 24, 2006

Ontology, huh?

There is something I don't understand about ontology. Suppose you have an ontology that contains, say, propositions and questions (understood as propositional abstracts), and an ontology that contains just propositions. What is the difference between the two? All of the questions in the first can be created, represented, or mimicked in the second by abstracting over propositions. What does the difference between them come to? If there is no difference, then what difference is there between an ontology which contains just the real numbers, which can represent everything physical in the world (though not, say, the power set of the reals, for cardinality reasons), and an ontology which contains real numbers and physical objects?
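
To make the worry concrete, here is a toy model of "questions as propositional abstracts". The assumptions are mine, not Ginzburg and Sag's formalism: a proposition is a set of possible worlds, and a question is a function from candidate answers to propositions.

```python
# A toy model: propositions as sets of worlds, questions as abstracts over
# propositions. All names and the stipulated facts are illustrative.

worlds = {"w1", "w2", "w3"}

def door_open_in(world):
    # Stipulated toy facts: the door is open in w1 and w2 only.
    return world in {"w1", "w2"}

# A proposition: the set of worlds where the door is open.
door_is_open = frozenset(w for w in worlds if door_open_in(w))

# The polar question 'Is the door open?' as an abstract: supply an answer,
# get back a proposition.
def is_the_door_open(answer: bool):
    return door_is_open if answer else frozenset(worlds) - door_is_open

# The ontological point: the question is definable from propositions alone,
# by abstraction, so it is unclear what positing questions as extra
# primitives adds.
print(sorted(is_the_door_open(True)))   # ['w1', 'w2']
print(sorted(is_the_door_open(False)))  # ['w3']
```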

The bounds of competence

In the last chapter of Lexical Competence, Marconi proposes a sort of thought experiment for investigating the bounds of linguistic competence that seems like it could be fruitful. Consider a natural language processing system that does not seem to understand language. Next, consider what needs to be added to it before you would attribute competence to it. The response 'understanding' is ruled out at the start because it would not get us anywhere. Could anything be added, say, to Searle's Chinese room to get it to the point where we would attribute competence to it? Marconi's own answer is that a visual perceptual system of sufficient intricacy would help it achieve a minimal competence. This still has problems, e.g. in cases where the concepts involved don't rely on visual or otherwise perceptual cues. It is an interesting start, though. To get the philosophical gears going, we can start to make the systems different from the Chinese room or a computer in a box. How about an android with perceptual systems covering all five senses? How about a thermometer? How about a robot that does not have any apparent linguistic output? These are a few cases it would be fun to kick around.

Tuesday, May 23, 2006

What makes a semantic interpretation?

In Ch. 3 of Making It Explicit, Robert Brandom asks a very interesting question: what makes a mapping from one set to another a specifically semantic interpretation of something? For Tarski's explanation of quantifiers in terms of topological closures, he says that it counts as semantic because it defines a notion of logical consequence appropriate to the idiom Tarski is using. So is defining logical consequence what is necessary for something to be a semantic interpretation? No. Brandom says that it is taking things as sentences, which means treating them as susceptible to the rules of reason, evaluable in terms of truth, and usable in derivations and new inferences. He goes on to say that formal semantics is semantics only if it presupposes being hooked up to some kind of appropriate pragmatics. He contrasts this picture with what Lewis does in "Language and Languages" and what Stalnaker does in Inquiry. I'm not clear on the details of the contrast, so I will come back to it later.

Situations and semantic ontology

One interesting aspect of situation semantics is its insistence that traditional ontologies for semantics are not adequate for the job they purport to do. This comes out in two ways. First is the richer ontology. Some semantic theories try to explain attitude reports in terms of just facts or just propositions; situation semantics uses both facts and propositions to explain the meanings of expressions. When coupled with the utterance-based approach and with partial situations (i.e., parts of the world rather than the whole world), this yields a lot more explanatory power. If you are trying to explain something, why not look to see what entities seem to be needed, posit those, and then cut back once the explanation is in place and it looks like some entities can be removed? The alternative seems to be to select one class of objects, e.g. propositions, and try to explain everything in terms of it.

The other way situation semantics differs from other semantic theories is that it uses a non-wellfounded set theory developed by Peter Aczel. I don't know the details of it, but it does not restrict sets to being wellfounded. This is a huge change, though I'm not sure how it plays out. I wonder how much the choice of that set theory was influenced by Aczel being at Stanford with the people who developed situation semantics.
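
For a sense of what the change amounts to, here is a toy sketch under an assumption of mine: that sets can be modeled as nodes in a directed membership graph, which is roughly the picture behind Aczel's anti-foundation axiom. ZF's foundation axiom forbids exactly the membership cycles this allows.

```python
# A toy illustration of non-wellfoundedness. In ZF, no set can be a member
# of itself; modeling "sets" as nodes in a graph whose edges mean "has as
# a member" makes such cycles unproblematic. Names are illustrative only.

class SetNode:
    def __init__(self, label):
        self.label = label
        self.members = []

# Omega = {Omega}: a set whose only member is itself.
omega = SetNode("omega")
omega.members.append(omega)

def has_membership_cycle(node, seen=None):
    # Detect a cycle in the membership graph, i.e. a failure of foundation.
    seen = seen or set()
    if id(node) in seen:
        return True
    seen = seen | {id(node)}
    return any(has_membership_cycle(m, seen) for m in node.members)

print(has_membership_cycle(omega))  # True: omega is not wellfounded
```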

Saturday, May 20, 2006

A distinction that I like

There was a distinction I read about, pertaining to meaning, that I rather like: the distinction between the theory of meaning and meaning theories. I believe it is due to Christopher Peacocke. A meaning theory is a theory of the meanings of the words of a given language; I believe this encompasses a theory of compositionality for that language. Each language has its own meaning theory. The theory of meaning is the general theory that covers the features common to the individual meaning theories. One idea is that to construct the theory of meaning, you make enough meaning theories that you can look at regularities across them. This seems sensible enough.

Friday, May 19, 2006

Fallbacks for semantic theory

It seems like one reason to try to mark off a distinction between pragmatics and semantics is so that one can tell when pragmatics can be used to account for a phenomenon and when semantics should be used. A lot of semantic theories (and fragments) seem to treat pragmatics as a buffer that absorbs and accounts for all of the theory's problems. It is hard to argue against a theory that can always be defended by an appeal to some vague, pseudo-Gricean reasoning. Of course, the distinction between semantics and pragmatics might be theory-relative, in which case it will be impossible to produce a single, comprehensive distinction.

Wednesday, May 17, 2006

Contextuals and score-keeping

One thing that Lewis's theory in "Scorekeeping in a Language Game" could shed some light on is the class of words that Cappelen and Lepore call 'contextuals'. These are words like 'foreign', 'local', 'enemy', and 'import' that aren't indexicals but seem to depend on some feature of the context of use to supply some of their semantic content. What that feature is isn't clear, and it doesn't seem uniform across the board. Contextuals also don't seem deictic in the way that 'these', 'that', and 'coming' seem deictic; that is just my intuition, not based on any deep theoretical observation. One thing Lewis's theory claims is that the truth values of utterances of a sentence can vary depending on what the score is when the utterance occurs. For something like 'I know that Oswald shot JFK', this seems loony. For something like 'I am an outsider', it seems less loony: depending on whether the score is set to the nation, state, county, town, or group, the utterance can variously be true or false. I have two questions. Was Lewis trying to come up with a justification for using a non-monotonic logic? And are the scores Lewis is talking about better modeled by something like circumstances and situations in situation semantics? The former is influenced by the use Brandom makes of Lewis (the influence I see, at least, although I haven't read Making It Explicit). The latter is influenced by the fact that situations primarily model local phenomena, and the scores seem to do that as well.
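
As a toy illustration of the score-sensitivity of 'I am an outsider', here is a sketch in which the score fixes which region is salient. The score representation is my own, not Lewis's.

```python
# A minimal sketch of score-relative truth: the conversational score fixes
# (among other things) which region counts for 'local' and 'outsider'.
# The data and the score format are illustrative toys.

speaker_home = {"nation": "US", "state": "PA", "town": "Pittsburgh"}
utterance_place = {"nation": "US", "state": "CA", "town": "Stanford"}

def i_am_an_outsider(score):
    # True iff the speaker is from outside the region the score tracks.
    region = score["salient_region"]
    return speaker_home[region] != utterance_place[region]

# The same sentence, uttered in the same place, varies in truth value as
# the score varies.
print(i_am_an_outsider({"salient_region": "nation"}))  # False
print(i_am_an_outsider({"salient_region": "state"}))   # True
```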

Tuesday, May 16, 2006

A well-run game in communication and semantics

The theory Lewis puts forward in "Scorekeeping in a Language Game" is both compelling and weird. He seems to be supplying two things: a partial theory of communication and a relativization of semantics. I think the two are mixed together too much. The truth-conditions of utterances shouldn't change that much based on how the conversation develops. This makes truth-conditions hyper-intensional: not only do they depend on the context in a deep way (not that bad), but they also depend in a big way on the order in which assertions are made. I can see presuppositions and truth-conditionally relevant presupposition failures doing this; I have more trouble seeing normal truth-conditions do that.

I'm not sure how Lewis understood "a well-run conversation." Conversations aren't much like baseball games. They don't have much in the way of constitutive rules. They have some normative rules, I suppose, but these are more like presuppositions or assumptions that conversants have going in: truthfulness and trust, on Lewis's picture. That is not enough to make a scoreboard. Nor does the analogy with baseball's rules work well for conversations. Many of my conversations are sprawling, involving abrupt topic changes and winding discussions or diversions that may or may not come around to reconnect with the original topic. This is fine and does not seem any less well-run than a straightforward exchange of tightly related assertions. I'm not sure whether he meant this as part of the analogy, but the rules of baseball underdetermine possible plays; there are scenarios not described in the rules that must be adjudicated case by case. I suppose conversations are like that, only with fewer rules and more outlandish situations. Was that intended in the analogy? I'm not sure, but I think the analogy has already broken down.

Monday, May 15, 2006

Deontic scorekeeping in dynamic logic

This is just a small idea post. Would it be possible to model something like Brandom's deontic scorekeeping in a dynamic epistemic logic framework? You have assertions that change the model. You would need to add entitlement and commitment operators. I think commitment operators could be defined via something like the logical consequence relation. I'm not sure what the mechanics behind entitlement are, so I can't suggest anything for that. It might provide an interesting alternative to the standard interpretation of dynamic epistemic logic. I'm not sure how the accessibility relation would need to be changed, though. This will require reading Making It Explicit.
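
Here is a minimal sketch of the commitment half of the idea, assuming (as suggested above) that commitment is closure under consequence. The consequence relation and the update rule are toy stand-ins, not Brandom's machinery, and entitlement is omitted since its mechanics are unclear.

```python
# A toy deontic score: each speaker has a commitment set that assertions
# update, closed under a (trivially simple) consequence relation.

# Toy consequence relation: a premise mapped to its immediate consequences.
CONSEQUENCES = {
    "it_is_raining": {"the_ground_is_wet"},
}

def assert_(commitments, claim):
    # An assertion updates the score: add the claim, close under consequence.
    new = set(commitments) | {claim}
    frontier = {claim}
    while frontier:
        p = frontier.pop()
        for q in CONSEQUENCES.get(p, ()):
            if q not in new:
                new.add(q)
                frontier.add(q)
    return new

score = set()
score = assert_(score, "it_is_raining")
print(sorted(score))  # ['it_is_raining', 'the_ground_is_wet']
```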

Thursday, May 11, 2006

Competence and performance

There is an old distinction in syntax (dating back to at least Chomsky's Aspects) between competence and performance. Competence is the theory of what an idealized speaker would say is grammatical. This includes sentences that are too long to ever be spoken and sentences whose modifiers are so confusing that it would take too much processing power for us to understand them. These are things that shouldn't be ruled out because their faults are not grammatical. Intuitively, they are things that a grammar shouldn't account for because they are, at root, dependent on non-grammatical aspects of speakers and interpreters. Performance deals with sentences that speakers and interlocutors actually produce and deem grammatical or not. This can be a little messy, since sleepiness, alertness, and intoxication can influence performance, and those things are not grammatical.

This distinction cuts both ways, however. It allows syntacticians to say that certain sentences that are either ruled in or out by their grammar are okay by shifting the focus more to the performance side or more to the competence side; they can lean a little one way or the other to bolster their theories. Leaning too much to the competence side threatens to divorce the theory from empirical matters completely, since facts about speakers do seem to matter somewhat to how syntax is processed. However, leaning too much towards performance results in sentences becoming ungrammatical according to the theory even though they are grammatical. (As an aside, I'm not sure whether there is a theoretical account of grammaticality; I've only seen it presented as an intuitive concept.)

I wonder if there is a similar competence-performance distinction for semantics. Such a distinction might help clarify what should be accounted for by a semantic theory and what should not be. Sometimes I think the pragmatics-semantics distinction covers this, although pragmatics accounts for a lot more. It isn't a perfect analogy, but it seems to get something right.

Monday, May 08, 2006

Recanati's contextualism and compositionality

Is Recanati's brand of radical contextualism compatible with the thesis of compositionality for natural languages? I take the thesis of compositionality to be (roughly) that the meaning of a sentence is composed out of the meanings of its parts. The modulation of senses that Recanati discusses in Literal Meaning is certainly compatible; it is roughly what Pustejovsky talked about in The Generative Lexicon. The sense of one word is changed to match, or covary with (I think that is his term), another word, e.g. in polysemy. This is compositional, since the meanings of the covarying parts together go into the total meaning of the sentence. Saturation is also compatible with compositionality. Free enrichment, however, doesn't seem compatible, since there is no limit or constraint on how the enrichment is carried out. Granted, the enriched meaning goes into the total meaning, so it is compatible in that sense. But the meaning of the word is not constrained to match its standard meaning, which seems to violate compositionality at one stage of the prepropositional processing. At the least, there is no systematic way to build up the enriched meaning from the base meaning in a compositional manner.
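
To see what the thesis requires, here is a minimal sketch of a compositional meaning function over toy lexical meanings; all the names and glosses are mine. The point is that free enrichment would have to inject material that no clause here can produce from the parts.

```python
# A minimal compositional semantics: the meaning of a tree is a function
# of the meanings of its parts. Lexicon and trees are illustrative toys.

LEXICON = {
    "ann": "Ann",
    "smokes": lambda x: f"{x} smokes",
}

def meaning(tree):
    # A leaf is a word; look up its meaning.
    if isinstance(tree, str):
        return LEXICON[tree]
    # A branching node: apply the function meaning to the argument meaning.
    fn, arg = meaning(tree[0]), meaning(tree[1])
    return fn(arg) if callable(fn) else arg(fn)

print(meaning(("smokes", "ann")))  # "Ann smokes"
# An enriched reading like "Ann smokes cigars after dinner" has no source
# in the parts above; that is the compositionality worry about enrichment.
```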

Views of philosophy (III)

Another conception of philosophy is as the study of argumentation. On this view, philosophers aren't concerned so much with the content of an argument as with its validity. Content comes into play when assessing soundness, but validity is primary. This view is probably useful pedagogically: when looking at texts, you can reconstruct the argument, assess it for validity, raise questions about its validity and/or soundness, and try to patch the argument. This is close to the practice of philosophy, at least at the undergraduate level. However, this view is somewhat like formalism in mathematics. It makes philosophy seem like a logic game that one plays, and it makes it hard to understand what philosophical progress is and why different strands of philosophical thought have been explored. Still, it gets something right, namely the importance of carefully and properly assessing arguments.

Sunday, May 07, 2006

Views on language

I've encountered two broad conceptions of language in the philosophical literature. The first is the roughly Wittgensteinian or pragmatist view: the limits of one's language are the limits of one's world. The second is the roughly Chomskyan or Horwichian view: language is a natural object that can be studied in a scientific way to produce laws that describe its behavior. I'm not sure the two views are incompatible, but there seems to be at least some tension between them. The pragmatist sees language as fundamental, with no way to get a bird's eye view on it. The Chomskyan sees language as an object or organism that can be studied in a detached way, just as one might study, say, a proton or a duck. It might be revealing to see how the two views interact: viewing the limits of one's world as an object of systematic empirical study.

Thursday, May 04, 2006

An unbalanced diet - one of my favorite insights by Wittgenstein

In the Philosophical Investigations, Wittgenstein says something in section 593 that I think is underappreciated. He says:
"A main cause of philosophical disease -- an unbalanced diet: one nourishes one's thinking with only one kind of example."
I'm not completely sure why this is underappreciated, but I have an idea. Philosophy is supposed to be a very general study; some might say it attempts to study things in the most general ways possible. Since we are looking at general phenomena, a solution to one particular problem should apply to others in the same area, e.g. the propositional attitudes. As John Perry pointed out, there are a lot of propositional attitudes (belief, doubt, thought, appreciation, hatred, etc.), but philosophers generally focus on belief and thought. Another reason the remark is underappreciated is that philosophers want to talk to each other, not past each other, and, so the thinking goes, talking about the same examples will help the discourse along. This has the unfortunate side effect that only a small set of examples or cases gets looked at, e.g. the sentence "It is raining". In the philosophy of math, philosophers generally focus on arithmetic, leaving the rest of math blowing in the wind, so to speak. The idea, I imagine, is that you shouldn't have to check each case: come up with a solution to one and you thereby come up with a solution to all of them. The problem with this idea should be obvious. If you appeal to particularities of the case you look at, your solution won't work for the other problems; you miss generality without even realizing it. I'm not sure whether that is what Wittgenstein was getting at with his ardent desire to look at particular cases, but I think it is a good lesson nonetheless.

Wednesday, May 03, 2006

A couple of conceptions of the lexicon

There are several conceptions of the lexicon. The two I will be concerned with here are the generative lexicon (GL) and the sense-enumeration lexicon (SEL). SELs are like dictionaries in that they list each individual sense of a word. Thus, 'bank' has an entry for a building, an entry for an institution, and so on for its other senses. These senses are usually distinct entries. This leads to a lot of entries, but each entry doesn't need much structure since it isn't storing much information. GLs posit fewer entries but with richer structure: entries together with rules and mechanisms for generating other senses, which is how they cover the building/institution senses we see in the case of 'bank'. Polysemy seems to be one of the biggest (the biggest?) considerations in favor of GLs. SELs account for polysemy by including more entries. This looks bad, since polysemy is systematic, so there are lots of entries for lots of words that are systematically similar. GLs are supposed to be better than SELs because they capture the overarching generality and uniformity that SELs miss; the different entries for 'bank' miss the uniformity given by the overall meaning of 'bank'.

The main arguments for GLs over SELs that I've seen run as follows: there is systematic polysemy, and language is generative; SELs can account for the former only by a proliferation of entries; accounting for the latter would require an infinite number of entries, by the meaning of 'generative'; SELs are by definition finite; therefore SELs cannot account for generativity. Additionally, SELs cannot capture the overarching uniformity in polysemous words. The responses I've seen say: SELs can capture polysemy through more entries, because the systematicity and uniformity aren't that big a deal; and generativity can be captured through enough entries, because there will only ever be a finite number of generated senses. As far as I can tell, there aren't particularly good arguments on either side, although the pro-GL arguments look better.
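
Here is a toy contrast between the two styles of entry, with made-up glosses and a single made-up sense-generation rule; neither is Pustejovsky's actual formalism.

```python
# Sense-enumeration: one flat list of senses per word.
SEL = {
    "bank": ["financial institution",
             "building housing such an institution",
             "edge of a river"],
}

# Generative: a base entry plus sense-generating rules shared across words.
GL_BASE = {
    "bank": {"type": "institution", "gloss": "financial institution"},
    "school": {"type": "institution", "gloss": "educational institution"},
}

def institution_to_building(entry):
    # One systematic rule: institution words also name the buildings that
    # house the institutions. This is where GLs claim the generality lives.
    return f"building housing a(n) {entry['gloss']}"

def gl_senses(word):
    entry = GL_BASE[word]
    senses = [entry["gloss"]]
    if entry["type"] == "institution":
        senses.append(institution_to_building(entry))
    return senses

print(gl_senses("bank"))    # both senses from one entry
print(gl_senses("school"))  # the same rule covers 'school' for free
```

The SEL simply stores the parallel senses word by word, while the GL derives them from one rule; that is the systematic-polysemy argument in miniature.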