Saturday, June 23, 2007

Rearticulating reasons: objectivity

This post is my stab at understanding the last section of the last chapter of Articulating Reasons.

In the last section of chapter 6 of Articulating Reasons, Brandom tackles the problem of making room for a notion of objectivity. This matters because he is trying to give an assertibility-conditional semantics. Truth-conditional semantics has a clear route to objectivity, since what is true is independent of any agent's attitudes in a reasonable sense. The primitives for the inferentialist are going to be commitment and entitlement, so the question is how to come up with, in Brandom's phrase, an attitude-transcendent notion of objectivity. This is one of the more interesting and important parts of the book, but it is also one of the more poorly written parts. Alas!

To make room for objectivity, there needs to be some way to cash out the difference in truth-conditions, without invoking truth, between two sorts of sentences: a claim and its meta-claims, where a meta-claim involves the attitude or ascription of an agent. For example, "I am in Pittsburgh" is a claim, and some of its meta-claims include "I am committed to the claim that I am in Pittsburgh" and "I assert that I am in Pittsburgh". There is a difference in truth-conditions between claims and their meta-claims. (The claim/meta-claim terminology is mine. It should be useful for explaining what is going on.) The primitives of evaluation for the assertibilist are entitlement and commitment. There are instances where one can be committed and entitled to both a claim and a meta-claim, e.g., to use Brandom's example: (a) "I will write a book" and (b) "I foresee I will write a book". Commitment to both can be obtained by, say, resolute avowals of your plans to write. Entitlement to both will be secured by roughly the same assertions: what can be said in defense of the former can be said in defense of the latter. The point isn't that commitment and entitlement to both necessarily go together, just that they can. So it is possible that the assertibilist cannot distinguish the two in terms of commitment and entitlement alone.

Commitment alone and entitlement alone cannot account for the difference. However, incompatibility, which was introduced earlier in chapter 6 and discussed briefly a few posts ago, can. To recap, two claims are incompatible just in case commitment to one precludes entitlement to the other. The interaction of the two primitive notions gives us the derived notion that will be used here. This is sort of the philosophy analog of Chekhov's gun principle: a concept introduced early on gets used in an argument by the end of the book. Here is where it happens. The incompatibilities associated with (a) differ from those associated with (b), i.e. Inc(a) != Inc(b). To take Brandom's example again, "I will die in 10 minutes" is incompatible with (a) but not with (b), taking "foresee" in a non-omniscient, slightly weak way.
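The test being run here is simple enough to write down. A toy sketch, where every claim and every incompatibility fact is invented by me for illustration (this is nothing like Brandom's own apparatus, just the shape of the test):

```python
# Inc maps each claim to the set of claims incompatible with it.
# All entries are stipulated purely for illustration.
INC = {
    "I will write a book": {"I will die in 10 minutes"},
    # On the weak, non-omniscient reading of "foresee", an early death
    # does not preclude entitlement to the foresight claim.
    "I foresee I will write a book": set(),
}

def distinguishable(a, b):
    """Two claims are told apart by incompatibilities iff Inc(a) != Inc(b)."""
    return INC[a] != INC[b]

print(distinguishable("I will write a book",
                      "I foresee I will write a book"))  # True
```

Even though the two claims may share assertibility conditions, their incompatibility sets differ, which is all the gap the assertibilist needs.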

This gives the assertibilist a way to insert a gap between a claim and its meta-claims, reflecting the gap that the proponent of truth-conditional semantics claims is there. This should show that there is space in the inferentialist picture for objectivity, in the sense of attitude-transcendence. While the assertibility conditions on claims and meta-claims might be the same, they can be distinguished in terms of incompatibilities. This doesn't go the extra mile toward an account of objectivity. If memory serves, there is a stab at that near the end of Making It Explicit. This move just shows there is room for the notion of objectivity.

Suppose one asks if there is a difference between the incompatibilities of a proposition p and "it is true that p". It seems straightforward that anything incompatible with the former is incompatible with the latter and vice versa. Assume that q is incompatible with p. Then commitment to q precludes entitlement to p. But what entitles one to p would entitle one to "it is true that p". So q is incompatible with "it is true that p". Similarly for the converse. Incompatibilities do not distinguish between p and "it is true that p", which seems desirable. Brandom thinks he can exploit the notion of incompatibility to define a predicate "it is assertible that ..." which is disquotational in the same way the truth predicate is. In his jargon, this would be a predicate such that "it is assertible that p" has the same incompatibilities as p, and so the same incompatibilities as "it is true that p". This would allow the assertibility-condition proponent to say that everything the truth predicate does can be simulated by a predicate defined out of only her primitive notions. I do not know what I think of this last move, defining this assertibility predicate. I'm much less comfortable with it than with other parts of the argument. It doesn't seem to get all the way to an account of objectivity, and it doesn't provide much beyond the demonstration that there is room for objectivity in the Brandomian picture. "It is assertible that..." also seems to introduce a modality which is lacking in the truth predicate (as pointed out to me by another Pitt grad student). Incompatibility is a modal notion, but, incompatibilities aside, the two predicates, truth and assertibility, differ as to modality, which is problematic for claiming an equivalence.

Friday, June 22, 2007

Reasons articulated, on to minding the world

The summer rolls on. The first six-week summer session at Pitt has ended, and I can now read German slowly. The reading group I'm in finished discussing Brandom's Articulating Reasons. We are moving on to McDowell's Mind and World starting next week. Various people and things have kept me busy lately, so I haven't been posting, but I hope to write up a few more posts on Articulating Reasons in the near future. There is more to be said on chapter 6, and I would like to go back and write up some more things on chapters 3-5, since I've so far only posted on chapters 1, 2, and 6. The best-case scenario is that I will write those posts. More likely, I'll have things to say about Mind and World and, since I never seem to have time enough for my projects, the posts on Articulating Reasons will get put on the backburner, right behind my posts on Prawitz.

Rearticulating Reasons: a note on the game of reasons

In Chapter 6 of Articulating Reasons, Brandom makes some distinctions among the proprieties of assertions in his sketch of how to make a semantics based on assertibility conditions work. The two main ones are commitments and entitlements. Commitments are the claims one is committed to by an assertion. These give inferential relations among contents. Asserting "This is red" commits one to "This is colored"; that is, the latter should be added to one's assertion score if the former is. Entitlements are the subset of commitments for which one can supply justifying reasons. These two notions come together to define incompatibility, which is a relation between contents: A is incompatible with B iff commitment to A precludes entitlement to B. This gives us two basic notions and one derived from their interplay. Now there are a few things to note about these.

Commitment is not, in general, symmetric. "A commits one to B" would not seem to entail "B commits one to A" for many A and B. Incompatibility is a modal notion on, as far as I can tell, two counts. The first is that "precludes" seems to be modal: commitment to A means that one cannot be entitled to B. There is, I suppose, a non-modal reading of "precludes": commitment to A means that one is not entitled to B. Even on that reading, incompatibility is a modal notion because entitlement is. To be entitled to A is to be able to give reasons in support of A, so entitlement is, it would seem, a modal notion. Incompatibility is a relation between contents, and a modal one at that. Incompatibility seems like it should be a symmetric relation, but the definition doesn't seem to guarantee this. Commitment to A may preclude entitlement to B while commitment to B does not preclude entitlement to A. At least, a proof of symmetry would be needed, and I don't have one. At a minimum, incompatibility should be symmetric for A and ~A, since '~' is supposed to express incompatibilities.
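The point that the definition alone does not secure symmetry can be shown with a single toy counterexample. The preclusion relation below is invented just to witness the possibility; nothing forces an actual scorekeeping practice to look like this:

```python
# Stipulated preclusion facts: commitment to A precludes entitlement
# to B, but not conversely.
PRECLUDES = {("A", "B")}

def incompatible(x, y):
    """x is incompatible with y iff commitment to x precludes
    entitlement to y (the definition from the post)."""
    return (x, y) in PRECLUDES

def symmetric(contents):
    """Check whether incompatibility is symmetric over these contents."""
    return all(incompatible(x, y) == incompatible(y, x)
               for x in contents for y in contents)

print(symmetric({"A", "B"}))  # False: the definition does not give symmetry for free
```

So symmetry, if wanted, has to come from substantive constraints on which preclusions a practice contains, not from the definition of incompatibility itself.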

From here, Brandom defines three sorts of relations. (He describes them as three sorts of inferential relations, though one is not a relation among inferences. This is primarily on pp. 193-4 for the interested reader.) These relations are: commitive inferences, permissive inferences, and incompatibility entailment. Commitive inferences are those that preserve commitments; similarly, permissive inferences preserve entitlement. One gripe here is that there is no explanation of what preservation is supposed to be. Most non-stuttering inferences (i.e. inferences not of the form A, therefore A) will lose commitments. Suppose we make the inference: that is vermillion, therefore that is red (to stick with the easy example). The conclusion has different commitments than the premiss; at the least, the conclusion does not commit me to the premiss. There's an idea there that one can grab ahold of, but we're not given a clear picture of it. Incompatibility entailment (I-entailment) is better defined, but it is surprisingly difficult to ferret out the definition. By "surprisingly" I mean that it requires any work at all; it should be clearer. (No philosophical gripe here, just a stylistic and pedagogical one.) Let's say that Inc(A) is the set of contents that are incompatible with A, i.e. {B: A is incompatible with B, for B in the appropriate language (pardon the hand-waving)}. A I-entails B iff Inc(B) \subseteq Inc(A). The example from the book is "The swatch is vermillion" I-entails "The swatch is red" because everything incompatible with the latter is incompatible with the former. Do we have to restrict ourselves to single premises and conclusions? I think, with reservation, that all three relations can be generalized to relations between sets of contents, with no restriction on the set of premises and with the restriction that the set of conclusions be a singleton. For I-entailment, this seems straightforward: for a set of contents H, Inc(H) equals the union of Inc(B) for all B \in H. I'm hesitant about extending this straightforwardly to commitive and permissive inferences since I'm unclear about what the preservation in their description is.
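The I-entailment definition and the proposed set-of-premises generalization are easy to write out. A minimal sketch, with the incompatibility sets for the swatch example stipulated by me (Brandom gives the example but not, of course, these particular sets):

```python
# Toy incompatibility sets for the book's example.
INC = {
    "The swatch is vermillion": {"The swatch is blue", "The swatch is green",
                                 "The swatch is scarlet"},
    "The swatch is red": {"The swatch is blue", "The swatch is green"},
}

def i_entails(a, b):
    """A I-entails B iff Inc(B) is a subset of Inc(A)."""
    return INC[b] <= INC[a]

def inc_of_set(premises):
    """The proposed generalization: Inc(H) is the union of Inc(B) for B in H."""
    return set().union(*(INC[b] for b in premises))

def i_entails_from(premises, conclusion):
    """H I-entails B iff Inc(B) is a subset of Inc(H)."""
    return INC[conclusion] <= inc_of_set(premises)

print(i_entails("The swatch is vermillion", "The swatch is red"))  # True
print(i_entails("The swatch is red", "The swatch is vermillion"))  # False
```

Note that the entailment comes out properly asymmetric here: being scarlet is incompatible with being vermillion but not with being red, so Inc(red) sits strictly inside Inc(vermillion).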

Why are these three notions important? I will get back to the relations at a later time. The two notions of commitment and entitlement are important because of how they figure in Brandom's game of giving and asking for reasons. Brandom argues that there are two necessary conditions on any game to count as a reasons game for assertions. One necessary condition is that the moves be evaluable in terms of commitments. The short reason for this is that assertions must express conceptual content. Such content is inferentially expressed, so assertions must fit into inferential networks. Making a move must commit you to inferentially articulated consequences in order for it to count as an assertion. The other necessary condition is that there be a distinguished class of assertions to which one is entitled. The short argument for this is that undertaking commitments implicitly acknowledges that justification for the commitment can be requested, at least in some circumstances. In separating out the moves that are justified/justifiable from those that aren't, one is creating a distinguished class of entitled assertions. Therefore, commitment and entitlement structures are necessary conditions for a game to count as a game of giving and asking for reasons for assertions. No sufficiency claim is being made. The notion of incompatibility, which links the other two normative notions, is important for what Brandom has to say about objectivity, which I will post on later.

Thursday, June 14, 2007

Here's looking at Euclid

A few weeks ago the Formal Epistemology Workshop (FEW) was held at CMU. I attended a few sessions, and they were pretty good. In addition to a few good talks, I got to meet Kenny, who also commented on one of the talks I liked. Writing about the talks, even though kind of belated, is a great way to procrastinate on this paper I have hanging around, so I figure I'll do that.

I went to four talks: one on diagrams in proofs, one by Kevin Kelly on his stuff, one incomprehensible one by Isaac Levi, and one neat one on philosophical issues in AI and information theory. I'm going to talk about the first one. It was by John Mumma of CMU, and it was on his thesis work. He was trying to offer a way of reconstructing Euclid's proofs in the Elements that gave the diagrams an important role. The traditional story, since Hilbert, is that diagrams don't have any actual role in the proof. Proofs are sequences of propositions or text, so diagrams, being neither propositional nor textual, play no role in proof. At best, they gesture towards how to fill in the missing steps in a proof. Mumma's idea was to come up with a way of using the diagrams in proofs. He distinguished between two sorts of diagrammatic properties, whose names I cannot remember. The rough idea was that one was metric (how long) and the other topological (what is between/inside what). Based on this distinction, he noted that Euclid invoked only the latter when his diagrams played a role in the proofs. The proofs contain instructions for constructing the diagrams, and, when the constructions are done out of order, the diagrams can lead us astray. He gave some examples and took the moral to be that the order of the constructions is important, that it creates dependencies among the parts and properties of the diagrams. From my point of view, the neat thing was what he did next. In order to talk about proofs that actually use diagrams in a central way, he defined a proof system that included syntactic diagram objects. These diagrams were (roughly) n x n grids with lines between various points (I think they ended up being 4-tuples encoding this information). Akin to inference rules, diagrams get operations, or constructions, on them. (He might have ended up calling them inference rules in the end.)
The proof system is a sequent calculus in which each sequent A, B -> A', B' consists of (sets of) diagrams A, A' and (sets of) formulas B, B'. This is all a bit rough since my memory of these details is hazy after the few weeks that have passed. Mumma had some nice result about this proof system being able to reconstruct all of the proofs from the first four or so books of the Elements. Well done! It looks like a neat project.
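To give a feel for the metric/topological distinction, here is a guess at the flavor of such diagram objects. Every detail below is my invention; Mumma's actual encoding (the 4-tuples) is surely different. The point is just that a betweenness query consults only collinearity and ordering of grid points, never how long anything is:

```python
# Hypothetical diagram points: integer coordinates on a grid.
def between(p, q, r):
    """Topological-style query: does q lie on the segment from p to r?
    Uses only incidence and ordering, not lengths or distances."""
    (x1, y1), (x2, y2), (x3, y3) = p, q, r
    # q is collinear with p and r (cross-product test, no distances)...
    collinear = (x2 - x1) * (y3 - y1) == (y2 - y1) * (x3 - x1)
    # ...and falls within the bounding box of p and r (ordering only).
    inside = min(x1, x3) <= x2 <= max(x1, x3) and min(y1, y3) <= y2 <= max(y1, y3)
    return collinear and inside

print(between((0, 0), (1, 1), (2, 2)))  # True
print(between((0, 0), (2, 1), (2, 2)))  # False
```

A metric property, by contrast, would require comparing lengths, which is exactly the sort of information Euclid's diagram-dependent steps are said not to rely on.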

I said the new proof system was the interesting thing from my point of view. I will explain. The traditional picture says that proofs are sequences of propositions or text related in a suitable way. Mumma's move was to expand the idea of a proof-theoretic language to include diagrammatic objects and operations on them. There is no reason to be overly strict with the notion of language for proofs. He did not go all the way to using full diagrams, it seems. The diagrammatic objects are encoded in 4-tuples (with some further abstracting into equivalence classes, I think). This means that some things are getting left out, but things must be left out to get the right level of generality needed for proofs, I suppose. If Mumma's idea pans out (must read papers...), it seems like it could have an application to inferentialist semantics of the Brandomian sort. Certain sorts of concepts, e.g. geometrical ones, seem to have natural inferential roles bound up with diagrams or pictures. It would be nice to be able to take advantage of that. But this is speculation. Something else I'm wondering is to what degree Mumma's work can hook up to Etchemendy and Barwise's project of investigating hybrid forms of information, which are forms of information that depend on narrowly linguistic forms of information as well as traditionally non-propositional forms, such as maps and diagrams. Maybe some day I will get to come back to this.

Wednesday, June 06, 2007

Precision and usefulness

Over at Logic Matters, Peter Smith has been doing a series of posts on a collection of articles on the Church-Turing thesis. The series has been informative and entertaining even though the collection seems to be pretty weak. Maybe it is partly because the collection is weak that the series has been good. Seeing what goes wrong in the articles is somewhat illuminating. Anyway. The newest post, on two articles, has a fantastic line:
"Horsten’s own discussion seems thoroughly inconclusive too. So I fear that this looks to be an exercise in pretend precision where nothing very useful is going on."
The post, series, and blog are all worth reading. [Addition: Peter Smith is the kind soul who put together the LaTeX for Logicians website.] [edit: I changed the title because I realized it didn't make sense in this context]

Tuesday, June 05, 2007

Overly strong statement of the day

I just realized how much the back of Articulating Reasons overshoots its actual content. I quote the latter half of the final sentence: "Articulating Reasons puts Brandom's work within reach of nonphilosophers who want to understand the state of the foundations of semantics." I feel like this is accurate only if the set of such nonphilosophers is empty.

Sunday, June 03, 2007

Rearticulating reasons: conservative and normative

In the second chapter of Articulating Reasons, Brandom makes a connection between logical and normative vocabulary. He says: "[N]ormative vocabulary (including expressions of preference) makes explicit the endorsement (attributed or acknowledged) of material proprieties of practical reasoning. Normative vocabulary plays the same expressive role on the practical side that conditionals do on the theoretical side."
In the first chapter he says that for logical vocabulary to play its expressive role, the introduction of logical vocabulary must be conservative over the old inferences. The question is: does normative vocabulary have to satisfy the same criterion to play its expressive role? I do not think that the distinction between theoretical and practical reasoning is going to influence this. The expressive role is to make commitments and endorsements explicit. A case is made that in order to do this, the explicitating vocabulary (what an awful phrase) must not add additional conceptual material to the mix, that is, must not make good new conclusions involving only old vocabulary. But that is just to be conservative over the old inferences. This means that normative vocabulary is not conceptually contentful, not even expressions of preference. This is mildly surprising. My next question is what makes normative vocabulary normative, if not conceptual content, since it has none. Is it just a sui generis property of that sort of vocabulary? We don't get any indication about what makes some bit of vocabulary normative, though, so I am hesitant to endorse that sort of strategy. I'm not sure what to make of this. Is logical vocabulary similarly normative? This might be where the theoretical/practical divide comes in, if the answer is no. One might say the practical dimension somehow confers normativity on the vocabulary. Although, since the inferentialist project in AR and Making It Explicit sees beliefs as normative (or whatever is discussed instead of belief), it is not unlikely that logical vocabulary has a normative element to it. This element is not conceptual, so I'm unsure what it would be. To back up a little, the question of whether the addition of normative vocabulary is always conservative is an interesting one. The idea is that normative vocabulary is supposed to codify certain practical inferences that one makes, e.g. I am a banker going to work, so I shall wear a tie.
The normative vocabulary codifies this inference, filling the role of the pro-attitude or missing premise in an enthymeme. So, would adding, say, 'should' allow inferences to conclusions not containing 'should' that were not previously allowed? Intuitively, no, although there isn't really anything more than that intuition given in the book. I'm not sure how one would argue this point generally, though. While one might let 'should' slip by as conservative, some of the other normative vocabulary is less compelling, e.g. expressions of preference.

Ironically interpreted

It seems like a fun little exercise to think about what Davidsonian radical interpretation would have to say about this guy.

More sappy links about research

Continuing my endeavor to link to advice by non-philosophers about non-philosophy...
Last night I stumbled across slides for a talk called "How to Have a Bad Career In Research/Academia". (The link is to the page with the talk, not to the talk itself.) That was, by and large, pretty disappointing. It mentioned a 1986 talk at Bell Labs which sounded kind of neat. I googled that and found "You and Your Research" by Richard Hamming. This talk was pretty good. Of course, it wasn't by a philosopher or about philosophy. It was by a mathematician, about computer science, mostly. Still, there was a decent amount of good advice, although some of it is hard to apply to philosophy. But there were large bits that seemed applicable and which I enjoyed. It also resonated, in a strange way, with Dennett's short piece on chmess. The main points in the talk were that in order to do great work one has to work hard, identify the important problems in an area, work on those problems, and be flexible. For people that don't want to read the whole thing, I'll quote my favorite bit:
"Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, ``Do you mind if I join you?'' They can't say no, so I started eating with them for a while. And I started asking, ``What are the important problems of your field?'' And after a week or so, ``What important problems are you working on?'' And after some more time I came in one day and said, ``If what you are doing is not important, and if you don't think it is going to lead to something important, why are you at Bell Labs working on it?'' I wasn't welcomed after that; I had to find somebody else to eat with! That was in the spring."
What on earth are the important problems in the different areas of philosophy...

Friday, June 01, 2007

Rearticulating Reasons: conservative and harmonious

I'm doing a reading group this summer with some of the other Pitt grads on Brandom's Articulating Reasons and (later) McDowell's Mind and World. I finally got my copy of Articulating Reasons (AR) back, so I figure I will do a few posts on issues we've covered in discussion. The first is going to be Brandom's discussion of harmony in the final sections of chapter 1. For some good background on harmony, check out the presentation slides that Ole put up. They are quite helpful.

On the inferentialist picture in AR, the meaning of a word or concept is given by the inferences in which it figures. Brandom discusses Prior's tonk objection to this idea for the case of logical constants, and then he goes on to state Belnap's reply that logical constants should be conservative. This will not work for placing restrictions on the introduction of new vocabulary into a language for an inferentialist (understanding the inferentialist project as trying to give the meanings for the whole language, not just the logical constants) since conservativeness is too strong. To be conservative is not to license any new inferences to conclusions that are free of the new vocabulary. That, however, is to add no new conceptual content, i.e. no new material inferences. Adding new conceptual content would mean licensing new material inferences which would interact with the old vocabulary and conceptual content to yield new conclusions. Brandom sums this up by saying, "the expressive account of what distinguishes logical vocabulary shows us a deep reason for this demand; it is needed not only to avoid horrible consequences but also because otherwise logical vocabulary cannot perform its expressive function." I take this to mean that logical vocabulary can make something explicit because it is not adding any content to the mix, not muddying the conceptual waters so to speak (or, to throw another metaphor in, not creating the danger of crossing the beams of the conceptual). I will say more about this in another post.
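Prior's tonk makes the conservativeness demand vivid, and the check is easy to mechanize. A minimal sketch, with an invented "material" base inference and one instance of the tonk rules (from A infer A tonk B; from A tonk B infer B): compute what is derivable from a premise before and after adding tonk, and look for new tonk-free conclusions.

```python
def closure(premise, rules):
    """All conclusions reachable from a single premise by chaining
    single-premise rules (a simple forward-chaining fixed point)."""
    derived = {premise}
    changed = True
    while changed:
        changed = False
        for a, b in rules:
            if a in derived and b not in derived:
                derived.add(b)
                changed = True
    return derived

OLD_VOCAB = {"this is vermillion", "this is red", "grass is green"}
base_rules = {("this is vermillion", "this is red")}

# One instance of Prior's tonk rules, enough to make trouble.
tonk_rules = base_rules | {
    ("this is red", "this is red tonk grass is green"),
    ("this is red tonk grass is green", "grass is green"),
}

old = closure("this is vermillion", base_rules) & OLD_VOCAB
new = closure("this is vermillion", tonk_rules) & OLD_VOCAB
print(new - old)  # {'grass is green'}: a new tonk-free conclusion, so non-conservative
```

The extension licenses "grass is green" from "this is vermillion", a conclusion in the old vocabulary that was not derivable before, which is exactly what Belnap's conservativeness constraint forbids.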

Brandom proceeds to the next obvious constraint on inferences, Dummett's notion of harmony, which is supposed to generalize Belnap's conservativeness. Brandom doesn't say much to shed light on what precisely harmony is. The idea, presented roughly, is to find some sort of nice relationship between the introduction rules (circumstances of application; left rules) and the elimination rules (consequences of application; right rules). Brandom reads Dummett as hoping there will be a tight connection between the consequences of a statement and the criteria of its truth (p. 74 in AR, p. 358 in Dummett's Frege: Philosophy of Language). Dummett also says that conservativeness is a necessary but not sufficient condition on harmony. Brandom thinks there is reason to doubt the latter, namely that new content can be added harmoniously to an existing set of material inferences, which conservativeness does not allow. Following Wittgenstein, Brandom thinks there is reason to doubt the former. I won't go into that though. He sees Dummett as wanting a theory of meaning to provide an account of harmony between circumstances and consequences of application.
Brandom says, "that presupposes that the circumstances and consequences of application of the concept of harmony do not themselves stand in a substantive material inferential relation." Instead, Brandom thinks the idea of harmony only makes sense as the process of harmonizing, repairing, and elucidating concepts. He seems to take it as requiring an investigation into what sorts of inferences ought to be endorsed, normative rather than descriptive. This process, on the Brandomian picture, is done by making inferential connections explicit through the codification of these in conditionals and then criticizing and defending these commitments. Sounds a wee bit Hegelian (I say, like I know anything about Hegel...).

Brandom's idea here seems to be that conservativeness works for characterizing or constraining what counts as a logical constant. I have some qualms about this since conservativeness will be relative to the initial vocabulary and set of material inferences. Harmony will not serve as a constraint on the wider field of all material inferences since the concept of harmony itself must stand in inferential relations, so it is subject to change over time, revised in light of other commitments. If this is so, I don't see why that consideration wouldn't also apply to conservativeness, unless that somehow can be added conservatively to all languages. I'm skeptical that that is the case. If that is right, I'm not sure how Brandom can maintain that harmony cannot function as a semantic constraint while conservativeness can. It seems like he wants to have his cake and eat it too.