Thursday, August 31, 2006

Classes

This semester it looks like I will be taking the metaphysics and epistemology core seminar, proof theory, a seminar on naturalism in philosophy, and a seminar on Kant's first critique. I expect that many of my future posts will be on these.

Retrogradation

In at least two places (I believe there are more) in Frege: Philosophy of Language, Dummett says that Frege made a retrograde move. The first of these was in assimilating sentences to the class of complex terms. In Frege's later philosophy, sentences refer to objects, just like other terms; they refer to the True and the False. This is opposed to their function in his early philosophy, which is to have truth-conditions. What's the difference here? Well, the former is a matter of reference and the latter is a matter of meaning. That's one big thing. The other is that on the complex term account, sentences lose their distinctive status. They are special in that they are what let us make moves in language games (what about the builders in the Philosophical Investigations?). They are things that we can take responsibility for, be committed to. Terms don't satisfy these functions at all.

The other retrograde move Frege made was characterizing logic as studying truth rather than inference. I'm not sure whether the tradition prior to Frege, which Kant said was a completed science, focused on inference in the sense of consequence. But I am not very familiar with the pre-Fregean tradition at all. Dummett blames the focus on truth as logic's object of study for the theoretical eddy that was the analytic/synthetic discussion of the logical positivists. Focusing on truth led the positivists to distinguish two kinds of truths, namely the analytic and the synthetic, which could explain why logic was useful and had truths despite having no empirical content. I think this is an interesting observation on Dummett's part. I'd like to check out the pre-Fregean tradition in logic and see how much it focused on inference or consequence over truth (massive display of ignorance here) and then reconsider his point.

Feyerabend and the web of belief

I read an article by John Dupre entitled "The Miracle of Monism" which was about how monism, unity of science, and naturalism are related. In it he cites some reasons that the traditional Popperian falsifiability criterion does not work, which (I think) he attributes to Feyerabend. I was surprised (pleasantly so) to see that a lot of the reasons listed coincided closely with my post on different ways to revise beliefs and Quine's web of belief. This makes me want to check out Feyerabend.

Tuesday, August 29, 2006

Philosophy of language: back to basics

One problem I find myself mulling over a lot is what the basic meaning bearers are, or what should be the main focus of attention in philosophy of language. That isn't quite the best way to phrase it. I think it will get clearer with some examples.

It seems like there are three main, related candidates for the basic bearer of meaning: sentences as types, utterances, and intentions, communicative or otherwise. Sentences are basic in the Kaplan tradition of semantics. Utterances are basic in the Perry tradition of pragmatics. Intentions are basic in the neo-Gricean tradition of pragmatics. Looking at this, one might be inclined to say that sentences serve for semantics and attempt to separate out the remaining candidates for pragmatics. This strikes me as misguided, but I'm not sure why at the moment.

There is a way in which all three are related. Sentences are types which are tokened in utterances. A near equivalence between utterances and sentences can be had by indexing sentences to agents, times, and locations (and maybe worlds), such that the sentence is uttered by the agent at the time, at the location, in the world. Fair enough. There is some underdetermination, since there are many ways to utter a sentence: quickly, slowly, with a drawl, with an English accent, etc. Indexing won't fix that unless it includes a wave form index, but that is just including an utterance type in the index. Intentions can be coupled with utterances since utterances are made by agents with certain intentions; utterances are actions, and actions are intentional. What is the connection between sentences and intentions? I'm not sure there is a direct link between the two.
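To make the indexing idea concrete, here is a minimal sketch (a toy of my own, not anything drawn from Kaplan or Perry) in which a context is just an agent, time, location, and world, and a Kaplan-style character maps contexts to contents. The names and the example sentence are made up for illustration; nothing in it settles which candidate is basic.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Context:
    """An index of the kind discussed above: agent, time, location, world."""
    agent: str
    time: str
    location: str
    world: str = "actual"

@dataclass(frozen=True)
class IndexedSentence:
    """A sentence type paired with a context of utterance."""
    sentence: str
    context: Context

def character_i_am_here(c: Context) -> str:
    """A crude Kaplan-style character for 'I am here': a function from
    contexts to (a paraphrase of) truth-conditional content."""
    return f"{c.agent} is at {c.location} at {c.time} in world {c.world}"

c = Context(agent="the speaker", time="2006-08-29", location="Pittsburgh")
s = IndexedSentence("I am here", c)
print(character_i_am_here(s.context))
# -> the speaker is at Pittsburgh at 2006-08-29 in world actual
```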

What recommends one over the others? Sentences have the benefit of being easily incorporated into a logic or formal semantics, a la Kaplan and Montague. This is Kaplan's reason for using them: as he puts it in his great phrase, semantics depends on the "verities of meaning, not the vagaries of action." Why use utterances then? To facilitate pragmatic theory. Perry opts for utterances over intentions and sentences since utterances are physical events with times and locations. This allows them to be carriers of information, a fact he exploits in his more general theory of information and situation semantics. Utterances are physical events which can carry information about the world given constraints to which interpreters are attuned and beliefs they have. This is used in his project of providing a naturalized basis for information. Intentions are basic if one is inclined to follow the Gricean line, like Sperber, Wilson, and Neale. The Gricean line is that saying or meaning is a form of intention recognition: meaning is conveyed just in case the speaker's communicative intention is picked up on by the interpreter. What do utterances and sentences do on this picture? Utterances provide a nuanced way for interpreters to get at the communicative intention; sentences are just the types involved in said utterances, I guess. This can be incorporated into a sophisticated theory of pragmatics, like Grice's own, which is a point in its favor, although not clearly an advantage over utterances, since many philosophers and linguists working in pragmatics take utterances as basic.

Clearly, taking one as basic precludes taking the other two as basic. At this point I just wanted to lay out something I see as a foundational issue in philosophy of language that I don't know of a solid answer to. There are a couple of other options that I didn't include in my discussion that might be worth including in the future: propositions and inference. As far as meaning bearers go, inference is an option and propositions are not; propositions are candidates for meanings, not bearers of them, I think. The question as to which of sentences, utterances, and intentions is basic can still arise for inference. Additionally, I see inference as competing mainly with reference for foundational status, so I didn't include it here.

Monday, August 28, 2006

Back...

I was on a brief hiatus while I moved and set up my new place. Classes start soon which means I will get back into the swing of philosophy (some might say the very slow swing) which will result in more posts soon.

Tuesday, August 15, 2006

Logic, logic everywhere...

I like logic as much as the next guy, but sometimes philosophers and logicians do things that strike me as ridiculous. Sometimes they act as if there were a need to create logics for everything, as if nothing were understood unless we have a formal calculus for it. This is not a complaint against all formal systems. The various forms of counterfactual logics are interesting and worthwhile, although how much light they shed on problems of modality I cannot say for sure. The main offender that springs to mind is a group of papers, which might be growing, on the logic of fiction. I'm doubtful of its efficacy for two reasons. One, I don't think there is enough of a consensus on how to understand fiction for there to be any sort of enlightening logic thereof. The relation of fictional discourse to non-fictional discourse is still a bit shaky. Two, I am doubtful that a logic will shed any sort of light on issues that people interested in literature will care about. The best case scenario that I foresee as a live possibility is that somehow computer scientists or someone working on contradictory independent discourses will use the stuff for some completely unrelated project. There are more offenders. The best summary of this thought that I have heard was from a linguist, Dan Jurafsky. He said that in graduate school, in computer science, they would spend their time coming up with new formal systems, and that it ended up being the biggest intellectual morass he had seen.

This may not have been fair to the logic of fiction people, but there are a couple of necessary conditions for creating formal systems that I think were not met. One is a degree of conceptual clarity and another is a clear motivation. I'm not feeling either of them.

Monday, August 14, 2006

Methodology

It is often said that there is no one philosophical methodology. Kant said in the first critique that philosophers were foolish to treat their arguments as if they had mathematical rigor. I'm not sure if this has the same force now that it did then since the methods of mathematics and philosophy have changed somewhat. Both have become a bit more rigorous, although I am not familiar with the gritty details of pre-19th century mathematics.

Quine and Davidson argued that there are three dogmas of empiricism: the analytic-synthetic distinction, verificationism, and the scheme-content distinction. For a time, it was considered a knock-down criticism of an idea to show that it presupposed the analytic-synthetic distinction. I'm not sure that it was ever that damning to show that something was verificationist. While in some areas, such as semantics or confirmation theory, verificationism is untenable, there are some areas that seem to get along pretty well with it, e.g. intuitionistic logic and its ilk. Are there areas in which the scheme-content distinction is employed in this way? Davidson used it against Quine, but I think Quine changed his stance on some things afterwards. McDowell uses it some in Mind and World.

Would it be a viable program to investigate what doctrines either presuppose one (or more) of the dogmas or entail one (or more) of them? I am thinking of something analogous to recursion theory. In recursion theory, there is something of a method to showing that some particular problem is not computable: show that a solution to that problem would yield a solution to the halting problem. Another, related method is to show that there is a diagonalization argument applicable to the given problem which shows it to be uncomputable. While showing that a given doctrine presupposes or entails a dogma wouldn't be as final a conclusion as the results in recursion theory, I think it would be illuminating. It also depends on how persuasive one thinks the arguments against the dogmas are. One example of a doctrine about which I wonder whether it presupposes a dogma is situation semantics and the scheme-content distinction. There is some talk in situation semantics about categorizing a metaphysical heap (so to speak) with concepts, which sounds like scheme and content.
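For comparison, here is the recursion-theory pattern I have in mind, as a toy sketch (the function names and the particular target problem are mine, chosen only for illustration): assume a decider for some problem, build a halting decider out of it, and conclude that no such decider exists. The philosophical analogue would be: assume the doctrine, derive one of the dogmas, and let the standing arguments against the dogma do the rest.

```python
import contextlib
import io

# Suppose, for contradiction, someone hands us a decider for the question
# "does program p, run on input x, ever print anything?"
def decides_ever_prints(p, x) -> bool:
    raise NotImplementedError("assumed to exist, for the sake of the reduction")

# Then we could decide the halting problem, which is known to be impossible.
def decides_halting(p, x) -> bool:
    def wrapper(_ignored):
        # Run p on x with its own output suppressed ...
        with contextlib.redirect_stdout(io.StringIO()):
            p(x)
        # ... and print something only if (and when) p(x) halts.
        print("done")
    # wrapper prints iff p halts on x, so the assumed decider answers halting.
    return decides_ever_prints(wrapper, None)
```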

Another idea is to see what theses are equivalent to the dogmas, in roughly the same vein as one shows equivalences between various forms of the axiom of choice in set theory. For example, Quine argued that the analytic-synthetic distinction was equivalent to Carnap's distinction between questions internal to a linguistic framework and questions external to it. Some things Davidson said made it sound like the scheme-content distinction is equivalent to the myth of the given. Again, this might prove illuminating.

One problem with this idea is that if one does not have some allegiance to empiricism, it might not hold much water as criticism. For example, rationalists were probably not terribly moved by Quine's arguments against the analytic-synthetic distinction. At least, Sellars holds one version of the distinction and Brandom uses it without pause. Even so, I think there's something worth looking into with this idea.

Thursday, August 10, 2006

Action and meaning

(Just a disclaimer, this is a rough post.) One of the biggest contributions that Berkeley made to philosophy (according to John Perry, and I tend to agree) is that he emphasized the importance of the connection between experiences and action. There needs to be a tight connection between one's perceptions and beliefs in order to explain actions, intentional or otherwise. Some of the things that play very large roles in the formation of new beliefs and intentions for action are the meanings of words. Suppose that meaning is given by reference relations between words and (other) things, a la 'cat' means cats. How could meaning explain action? The meanings would have to have an interface with the agent's mind, but this is lacking since the objects are for the most part non-mental. You get objectivity for free, but you need to bridge the gap for explaining action. I suppose one way to do this is with mental representations, but this seems like it would lead to other problems, like regresses or skepticism. Maybe a better response is to say that in grasping a meaning, the agent takes the meaning and puts it into a form that is amenable to mental interactions. She forms representations? (I'm sure there is a literature on this, but I haven't read it.) Next we have to make sure that the representations accurately mirror nature.

To approach this from another direction, if meanings are inferential connections, then it seems like we can explain the links with actions quite easily. Meanings are mental entities of a sort; they are connections among beliefs. This gives us the connection to intentions which makes it easy to see how meaning of this sort could figure into an account of action without needing to invoke representations of external relations. This gives us the action-intention link, but it presents a pickle of a problem: objectivity seems hard to get. I'll need to go back through Brandom's arguments for objectivity in Making It Explicit and write a post or two about it.

Tuesday, August 08, 2006

Primitive truth

I just noticed the option of displaying a title field when posting. I'm going to try to have titles from now on.

One of the things that I find both interesting and frustrating about Davidson's work is how he treats the concept of truth as primitive. He takes it for granted in a great many things, saying that without a concept of truth one cannot have these other concepts. There is something right in that I think. For example, linking a notion of truth with a notion of objectivity, like he does, seems like the right order of explanation. I'm not sure you can get all the mileage out of truth that he tries to, but it is a good thing to keep primitive.

There are two things about it that I find frustrating. First, he never explains how we are supposed to learn the concept of truth. He could make it practically ineffable by saying that it is an innate concept that we cannot do without, rationally or biologically. This would be a bit extreme. It is really hard to see how one would teach it to a small child since it is pretty abstract, but small children do get a grasp on it. McDowell brought up a similar point with the concept of rationality. How does one learn that concept? Concepts like these require a sort of meta-level position from which to observe them in behavior, and to see them there you already have to know what to look for. They can be demonstrated in practice, to a point, but there are a whole lot of theoretical concepts that would apply equally well to those patterns. This leaves things somewhat mysterious.

The other reason I'm frustrated with the primitive concept of truth is one of the reasons given for saying that truth is undefinable or must be primitive. Davidson cites Tarski's theorem that truth for a language is undefinable within the language on pain of inconsistency. Tarski's result is perfectly clear. What isn't clear is by what right Davidson imports that result from FOL into English. English certainly contains its own semantical words. Even if we regiment English into a theoretical fragment for use in our truth theories, we still haven't shown any sort of translation or correspondence between it and FOL that would allow for theorems in one to be used in the other. Maybe Davidson is idealizing away a lot of things in English such that he is left with a pseudo-formal calculus in the spirit of FOL. This would be a big idealization if it were made, and it would still be in as much need of justification as the current use of Tarski's result in plain English.
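For reference, here is the result being cited, in its usual formal setting (the standard textbook statement, not Davidson's wording); the question above is by what right it gets transferred from this setting to English.

```latex
% Tarski's undefinability theorem, in its usual arithmetical setting.
% Let $T$ be a consistent theory that can represent its own syntax
% (e.g. one extending Robinson arithmetic), with a coding
% $\varphi \mapsto \ulcorner\varphi\urcorner$ of sentences by terms.
% Then there is no formula $\mathrm{Tr}(x)$ of the language of $T$ such that,
% for every sentence $\varphi$,
\[
  T \vdash \varphi \leftrightarrow \mathrm{Tr}(\ulcorner\varphi\urcorner).
\]
% Proof sketch: by the diagonal lemma there is a sentence $\lambda$ with
\[
  T \vdash \lambda \leftrightarrow \neg\mathrm{Tr}(\ulcorner\lambda\urcorner),
\]
% and the two biconditionals together make $T$ inconsistent.
```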

Monday, August 07, 2006

Syntactic theory

When I took syntactic theory, Ivan Sag said something that seemed mysterious at the time but now, after working through a book on formal language theory, makes a great deal of sense. He said that older syntactic theories were working with mathematics from the 40s and 50s and were basically sophisticated string rewrite systems. What did he mean? The transformational grammars were based on context-free grammars of the variety inspired by Emil Post's work. These enable one to take a string and apply a series of generation rules that change the string. When combined with transformation rules, one can change the order of substrings in a string, including adding in null elements. There is no deeper structure to it than this, however.
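Here is a toy illustration of what "sophisticated string rewrite system" means (my own minimal sketch, not any particular grammar's rule set): context-free rules rewrite a symbol into a string of symbols, and a "transformation" then reorders substrings of the generated string.

```python
import random

# Context-free generation rules: rewrite a single symbol into a string of symbols.
RULES = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "cat"], ["the", "dog"]],
    "VP": [["chased", "NP"], ["slept"]],
}

def generate(symbol="S"):
    """Expand a symbol top-down into a list of terminal words."""
    if symbol not in RULES:
        return [symbol]  # terminal symbol: leave it as is
    out = []
    for sym in random.choice(RULES[symbol]):
        out.extend(generate(sym))
    return out

def passive_like_transformation(words):
    """A crude 'transformation': reorder substrings of the generated string,
    e.g. 'the cat chased the dog' -> 'the dog was chased by the cat'."""
    if len(words) == 5 and words[2] == "chased":
        subject, obj = words[0:2], words[3:5]
        return obj + ["was", "chased", "by"] + subject
    return words

sentence = generate()
print(" ".join(sentence))
print(" ".join(passive_like_transformation(sentence)))
```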

Friday, August 04, 2006

Conversational impliciture

Kent Bach has this idea he calls "conversational impliciture." It comes out of his reading of Grice. He says that Grice posits a constraint on what is said: each part of what is said should correspond to (be expressed by?) a syntactic element of the sentence. What does that get us? If you have a sentence that looks "semantically incomplete," then without the constraint you would be tempted to say that the missing semantic elements are there, just unexpressed or unarticulated. Bach says these things aren't part of what is said since they don't have a corresponding syntactic realization. They are implicit in the conversation though. Rather than enriching what is said, he suggests a new layer, conversational impliciture, which is the material that is implicit in a given utterance. This meshes with some of his recent stuff because he does not think that utterances of full sentences always express complete thoughts or propositions. He is all for accepting semantic incompleteness.

Here are three quick, related questions I have about his view. First, how different is it from, say, the unarticulated constituent view of Perry? Both have implicit, assumed information making its way into propositions. (I'm pretty sure conversational impliciture is propositional.) The main difference seems to be that Perry puts it in what is said (or the locutionary content, which, I believe, is his preferred term these days) while Bach leaves what is said semantically incomplete and puts it somewhere else. Does this difference end up coming to much, though?

Second, how does impliciture interact with implicature? Is it used in the derivation of implicatures? Grice says that what is said is used to determine what the implicatures are, in combination with the maxims. I suppose it becomes a part of the background knowledge, but it makes what is said seem otiose. If we have an implicature whose derivation requires using the impliciture, which is an enriched version of what is said, then what is said is doing exactly 0 work in the derivation. It only serves to get us to impliciture, then drops out. It does not seem difficult to construct cases in which this would happen, say an implicature depending on someone not having eaten today when the person says "I have not eaten." This example needs a lot of fleshing out before it becomes convincing, but hopefully one can see where it is heading. This leads to the third question.

What exactly is the difference between implicitures and implicatures? They seem to require very similar mechanisms in their determination. Why not say that impliciture is an intermediate step in the derivation (if I may use these terms as if the derivations were well-defined) of implicature from what is said? There doesn't seem to be any good demarcation line between the two concepts. Of course, an unclear boundary doesn't mean that the concepts are worthless (Quine didn't win that fight in all cases), but it would be good to clarify.

Tuesday, August 01, 2006

Sellars on meaning and inference

In his "Meaning and Inference" Sellars makes a distinction between material inferences and logical inferences. Logical inferences are classically valid and proceed from 'p->q,p' to 'q', while material inferences go from 'p' to 'q' based on their material contents. This distinction is also made in Carnap's Logical Syntax of Language as L-inferences and P-inferences respectively. The paper goes through a detailed discussion and criticism of Carnap's work, which more and more seems to be something I wll have to go through. I will mention a couple of points that Selalrs makes that interested me.

First he insisted on the rule part of 'rule of inference'. He focused on the normative force that rules have and how they are involved in action. Reformulating a rule in such a way that it does not indicate an action is to eliminate the rule in favor of a description of the circumstances in which the rule can be applied. This seems basically right. His example is: X is arrestable =_{def} X broke a law. The left-hand side features a permissible action while the right-hand side features only a description of a state of affairs. He discusses logical necessity as being embodied in rules of inference: something is logically necessary only if it conforms to a certain inference pattern. This is where he ties together modality and normativity with the phrase that Brandom deployed so well: modality is a transposed language of norms. Sellars makes a lot of connections between object-language and metalanguage phenomena. He also shows how counterfactuals within a language can be eliminated if one endorses certain rules of inference in the metalanguage. One of my favorite comments made by Sellars is about the confusion of regularity with following a rule. Both are learned behavior, but there are many differences between the two, and ignoring these differences has led to a lot of nonsense about ostensive definition, in roughly his words.

Second, one of the surprising things was how much importance was placed on counterfactual reasoning. The counterfactual inferences that one endorses show what content is assigned to each of the descriptive terms in the inferences. The fact that counterfactual reasoning is not classical and that the subjunctive conditional is not a material implication is seen as a feature and not a bug, pace Quine.

Third, there was a brief argument at the end about how modal and normative vocabulary does not assert psychological facts although it conveys them (I think that is how he put it.). This results in the claim that modal, normative, and psychological vocabulary are not reducible to each other. I did not follow this argument, but it was surprising to see so much of "Between Saying and Doing" show up in some guise in this essay.

Fourth, Sellars seems to launch sustained criticisms of empiricism in his work, particularly of the logical positivist kind. While he praises Carnap a great deal, I think he successfully showed that a functioning natural language cannot do without P-inferences and intensional counterfactuals. Carnap thought that extensional L-inferences would do the job just fine. Granted, Carnap was making artificial calculi, not natural languages, but he did think that the artificial languages were capable of being adopted by people as a natural language (according to Sellars).

Sellars on concepts and laws

Of late I've been getting into Wilfrid Sellars's work. I read his "Concepts as Involving Laws, and Inconceivable without Them," which Brandom makes so much of. It was quite an essay. One of his main points, which I found refreshing, was his pluralism about universals. I'm not completely sure how to formulate it, but it is roughly the idea that the set of universals instantiated in or relevant to our world does not exhaust the set of all universals. This leads to several interesting ideas. Instead of saying that there are no substantive relations between universals, e.g. that they are reorganizable since they are logically independent, he says that there are substantive relations between universals. The laws that govern their instantiation are essential to them, as are the particulars that instantiate them, but these universals are essential to those laws and particulars as well. This means that although laws are logically independent of properties, changing the laws means changing the properties. This connects with his championing of the material content of terms, but that is a digression.

The universals that are instantiated in or relevant to a world are grouped together to create a family of histories. Each history in a family involves the same set of universals, although different starting conditions will lead to different courses of history. What light does this shed on anything? The motivation for the paper is to explain what laws are, if they are neither analytic nor restricted to actual events, and to address a related dilemma about what sort of implication is involved in laws. Laws, for Sellars, are conditionals that are true across all histories in a family. Accidental regularities are true in some but not all histories in a family. That is a pretty tidy formulation that falls out of his idea.
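Stated a bit more explicitly (my notation, just unpacking the gloss above):

```latex
% For a family of histories F, all involving the same set of universals:
\[
  \varphi \text{ is a law relative to } F
  \;\iff\;
  \varphi \text{ holds throughout every history in } F
\]
\[
  \varphi \text{ is an accidental regularity relative to } F
  \;\iff\;
  \varphi \text{ holds throughout some, but not all, histories in } F
\]
```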

Another thing that is very interesting about Sellars is his focus on counterfactuals and counterfactual situations. He was writing around the same time as Quine, when extensionalism was in vogue. He does a good job of showing how taking intensional constructions seriously can lead to good results in philosophy. There are a lot of other differences between Sellars and Quine, but that one in particular comes out in this essay.