This was noted in the lecture for the class I'm TAing for. Apparently people are now using "argue" in a way that mirrors how most philosophers would use "argue against". For example, a youngster in Pittsburgh would say "no one can argue that I'm in Pittsburgh" and want to be interpreted as saying that no one could present an argument against the claim that he is in Pittsburgh. I would say that "argue" is adopting the meaning of "refute", but another shift is the use of "refute" to mean deny. For example, Bill refuted my premise that the sky was blue by saying that the sky was not blue. Bill didn't present an argument demonstrating that my claim was wrong; he just asserted that it was. "I refute you thusly: not-p." I was sort of aware that the latter shift was occurring, but the former was completely off my radar.
Tuesday, January 22, 2008
I almost titled this post "In which I try to justify what I do." This term I'm doing a directed reading on algebraic logic, focusing on Dunn's book. This stuff is interesting, in part because I don't know much about algebra and this is providing some much needed background. One of the problems with this is figuring out how this applies to philosophy, or at least to philosophical logic. Here's a stab at it, albeit a somewhat sketchy stab.
One idea I had, which Dunn goes into some, is to investigate the correspondence between algebraic conditions and structural rules in proof theory. If we treat '≤' as a relation of implication and '→' as an implication operation, then we can introduce a binary operation '•' which is a premiss-grouping operation. It is the fusion operation of relevance logic. It can be used to relate the relational and operational forms of implication via the residuation law: a•b≤c iff a≤b→c. There is likely more to say here about the connection between implication viewed relationally and viewed operationally.
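As a toy illustration of the residuation law (my own sketch, not an example from Dunn's book), one can check a•b≤c iff a≤b→c exhaustively on the two-element Boolean algebra, reading '≤' as the numerical order, '•' as meet, and '→' as material implication:

```python
# Residuation check on the two-element Boolean algebra {0, 1}.
# Here ≤ is the usual numeric order, fusion • is meet (min),
# and b → c is material implication, the residual of meet.
from itertools import product

S = [0, 1]
leq = lambda x, y: x <= y
fuse = lambda x, y: min(x, y)          # fusion as meet
imp = lambda x, y: max(1 - x, y)       # residual: x → y

# a•b ≤ c  iff  a ≤ b → c, for all a, b, c
for a, b, c in product(S, repeat=3):
    assert leq(fuse(a, b), c) == leq(a, imp(b, c))
print("residuation law holds on the 2-element Boolean algebra")
```

In richer algebras fusion need not be meet, of course; the point is just that the law ties the relational order to the implication operation.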
It turns out that on the structure (S, ≤, •, →, ←), where S is the domain, conditions on • correspond to structural rules, which can lead to different proof systems. For example, if • is only required to be associative, a•(b•c)=(a•b)•c, then (S, ≤, •, →, ←) yields the Lambek calculus. Adding commutativity, a•b=b•a, yields linear logic. There is a question of what is meant here by "corresponding". I think what Dunn means is that provably equivalent sentences are identified in the various algebras. He does mention what he calls "a subtlety implicit in the relationship of the algebraic systems to their parent logics" that comes out of the logics having connectives in them apart from the arrows. There seems to be something there.
Related to this, Dunn spells out some conditions on implication and negation that make very clear what extra conditions the stronger forms of these operations have. For example, the difference between intuitionistic and classical negation, when looking at them in terms of lattices, is that classical negation adopts a∨-a=1 (where '-' is the negation), while intuitionistic negation does not. There are further conditions that intuitionistic negation adopts that other negations don't. There are similar sorts of conditions on implication operations. I'm hoping that Restall's book will have some philosophical starting points regarding these things. At the moment I'm not sure where to go with this though.
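The lattice-theoretic contrast between the negations can be made concrete on small examples. In this sketch of mine (not an example from Dunn's book), negation is defined as -a = a→0; on the three-element chain 0 < 1/2 < 1, viewed as a Heyting algebra, the classical law a∨-a=1 fails at the middle element, while it holds on the two-element Boolean subalgebra:

```python
# Classical vs. intuitionistic negation on small lattices.
# On the chain 0 < 1/2 < 1, viewed as a Heyting algebra, implication
# is a -> b = 1 if a <= b, else b, and negation is -a = a -> 0.
from fractions import Fraction

chain = [Fraction(0), Fraction(1, 2), Fraction(1)]
join = max

def heyting_imp(a, b):
    # Relative pseudo-complement on a linear order.
    return Fraction(1) if a <= b else b

def neg(a):
    return heyting_imp(a, Fraction(0))

# a ∨ -a = 1 fails at the middle element of the chain:
for a in chain:
    print(a, join(a, neg(a)))   # the middle element yields 1/2, not 1

# Restricted to the two-element Boolean subalgebra {0, 1}, the law holds:
for a in (Fraction(0), Fraction(1)):
    assert join(a, neg(a)) == 1
```

The failure at 1/2 is exactly the point: intuitionistic negation satisfies everything classical negation does except the excluded-middle condition.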
Another idea came from a remark by a more knowledgeable grad student: algebraic logic handles modality easily while it handles first-order quantification poorly. This was surprising since one can view normal propositional modal logics as restricted, or "guarded", quantification over a domain of worlds. I don't know the technical details of the algebraic approaches to either of these at the moment, so I can't say any more.
I am not that far into the book yet; I am still working through the important foundational material. There are a lot of technically interesting ideas and theorems in here. My goal is to get some philosophical mileage out of them. I think I've gotten some mileage out of the model theory stuff from last term (still need to post that...), but I'm not sure what this stuff will yield yet. Ideas are always welcome. I'm approaching the foundational chapters on syntax and semantics, which seem like they could lead to some ideas.
Monday, January 21, 2008
While I tend to like my philosophy to read more like Quine, Sellars has his high points. There is a certain appeal to the metaphor and grand sounding claims in his writing, e.g., the space of reasons stuff and the manifest image paper. Consider the last paragraph from his Empiricism and the Philosophy of Mind:
"I have used a myth to kill a myth -- the Myth of the Given. But is my myth really a myth? Or does the reader not recognize Jones as Man himself in the middle of his journey from the grunts and groans of the cave to the subtle and polydimensional discourse of the drawing room, the laboratory, and the study, the language of Henry and William James, of Einstein and of the philosophers who, in their efforts to break out of discourse to an arche beyond discourse, have provided the most curious dimension of all."
Can anyone still write like that?
Posted by Shawn at 8:42 AM
Sunday, January 13, 2008
Other people have thoughts on Brandom's third Woodbridge lecture too. I will get back to my comfort zone of topics this week, I hope.
The third lecture was primarily on Hegelian ideas. In a way, it was the most interesting since it talked about conceptual change. In a way, it was the least interesting since I think it was the least cohesive and I didn't understand a few things in it. If I had a better sense of Hegel, or if Brandom had put a little more meat on the bones of his story, I might have thought otherwise. One of the ways he explained the story of conceptual change was through common law. Judges recognize certain past rulings as binding and use those to lay down rules for the correctness of future decisions, which must in turn be recognized as binding for future judges to be bound by their precedents. This is supposed to combine the synthesis of judgments from the first lecture with the reciprocal recognition stuff from the second lecture. This works as a story for things like common law and more human-dependent concepts, such as justice. I'm not sure I see what is supposed to happen here with respect to concepts like atom and other more scientific concepts. Is the world supposed to be the other party? Are the scientists recognizing certain past experiments as providing refuting or confirming results in cobbling together theories which are taken as constitutive of new iterations of the concepts? Brandom was pressed on this point in the questions but I didn't follow what he said. He seems to think that this model works both for concepts which are, to use a different vocabulary, more historical and for those which have more of an essence.
The development of concepts has two parts or temporal perspectives. One is retrospective, looking back at successful applications and constructing a story about how they were correct. The other is prospective, making judgments about what should count as correct novel applications of a concept. (As an aside, the notion of an application of a concept started to seem weird as the lectures progressed. The idea of the application of a function in, say, the lambda calculus has a determinate meaning. It is a little less clear what it is to apply a concept. It sounds like one is putting a stamp on something. The application of a concept sounds a lot like a doing of some sort, but at the moment it isn't clear what sort of doing it is.) This cashes out the content of a concept in terms of an activity. This is one of the key methodological ideas in MIE, i.e. pragmatism, or methodological pragmatism: what must one do in order to count as using an expression with a certain meaning? Brandom attributed this idea to both Kant and Hegel. I'm not sure how well it fits with either, but it is an idea that I like. Apparently Stalnaker has recently started writing about it too. (Brandom alluded to Stalnaker's writing at the start of last term but didn't say what things in particular.)
The process of concept change seems to be driven by a concept that stays in the background, reason or rationality. (This point was brought out and pressed by one of the other grad students.) Brandom doesn't say much about this and he said that Hegel doesn't say much about it either. It somehow stays out of the conceptual flux. He got out of this one by claiming that Hegel wasn't trying to explain the concept of reason. Exegetically this move is probably fine, but if one wants to rehabilitate Hegelian ideas, it seems like something that needs to be tackled. Why does reason stay so stable while all the other concepts contain the seeds of their own destruction?
The other thing that I was puzzled by in this lecture was the notion of expressive progress. Brandom puts it: "Exhibiting a sequence of precedential concept applications-by-integration as expressively progressive - as the gradual, cumulative making explicit of reality as revealed by one's current commitments, recollectively made visible as having all along been implicit - shows the prior, defective commitments endorsed, and conceptual contents deployed, as nonetheless genuinely appearances representing, however inadequately, how things really are." The expressive part is odd since it wasn't explained how the integration of new judgments makes explicit anything implicit in the old commitments. One isn't putting them in propositional form, just rejecting, extrapolating or justifying. In MIE this phrase would have had a definite meaning but in the context of Hegel it is unclear what is happening and surely he is not importing the whole theory of MIE into his view of Hegel. The progressive part is also odd since there doesn't seem to be any reason why one should think that a new integration should make everything clearer while obscuring nothing. It seems likely that new judgments could clarify certain things but require us to give up nice explanations or understandings of certain phenomena. An example would be something like Galileo providing a lot of new explanatory material for certain things but not having any explanation of inertia to replace the Aristotelian one he rejected. (If this story is wrong, there should be something analogous out there.) Why could concepts not take two steps backwards to get one step forward? One might say that once you have the truth you don't have to let it go, but that sort of picture seems to be rejected here. One might be an early adopter of the concept of atoms, then be forced to give it up by reasonable arguments, then come back to the concept of atoms later on once the other non-atomic concepts have fallen apart. 
The claim might be that rather than each individual integration being progressive, it is only expressively progressive in the long run. This seems slightly more reasonable although still mysterious.
One thing that was raised during the question period by Anil Gupta, which I found very interesting, was a complaint that Brandom's version of idealism made it mysterious what Moore and Russell's complaints about idealism were aiming at. I don't know what Russell's complaints were. I thought he just abandoned Kantian views of math and such wholesale. Moore, apparently, had arguments against the idealistic thesis of the world's dependence on the mind. This goes missing in Brandom's story of idealism. It brings out how different this version of idealism is from the older versions that seemed to have more teeth. Interestingly, the response from Brandom was that those complaints were based on a misunderstanding of Kant and Hegel. The question this leaves us with is what is so idealist about this idealism. Is it the lack of reliance on the notion of experience? Is it the story about concept formation and change? The notion of phenomena? At the end of all this the notion of idealism, whose animating ideas were supposedly laid bare, remains somewhat obscure. I'm not sure what to make of this question either. It highlights the unorthodox nature of the interpretation, but this was also made explicit up front. It would have been nice to hear more about it. I'm not sure what sort of idealism is being endorsed in this story.
Posted by Shawn at 8:34 PM
Saturday, January 12, 2008
The second of the Woodbridge lectures covered a lot of both Kant and Hegel. In this one, Brandom started with the familiar story about how Kant gives a positive conception of freedom arising from binding oneself by norms. I'm most interested in a fairly narrow subsection of this talk. Binding oneself by norms is part of the Kantian conception of autonomy. You bind yourself by norms by recognizing those norms as binding. The question then arises as to the source of the authority of these norms. I'm not sure I understand exactly what happens here though. The Hegelian response to the Kantian view of autonomy is to say that the force of normative statuses is instituted by the normative attitudes of the members of the community. One is part of a community in virtue of a reciprocal recognition by members of the community.
The sketch of reciprocal recognition given in the lectures made the view of community seem too tidy, the relevant people seeing all the other relevant people as being members of the same group. I wondered what Brandom/Hegel would say about situations such as this. Camus was an existentialist for a while (right?), and others took him to be one. Then he decided he wasn't but many people still took him to be one. Now he is still regarded as an existentialist, a part of that community's legacy, even though he refused to recognize this. Brandom said that the individual attitudes were necessary but not sufficient while the joint attitudes of everyone are sufficient. I guess this would mean that Camus would not be a member of the community, at least not until his attitudes ceased with death. A question was also raised about the degree to which one could opt in or out of a community. The discussion seemed to quickly move the scope of the community to the whole of concept-using humanity, but I'm not sure how it moved there so quickly.
The reciprocal recognition found in the Hegelian explanation of Kantian autonomy does a funny thing. The individual sort of autonomy seems to go missing in the social, reciprocal recognition picture. Brandom didn't seem to find this bothersome. The Kantian autonomy is still there, in the sense of norms binding one only when one takes the norms as binding. At least, that is how Brandom responded when I asked him about it. I'm not completely comfortable with his answer though. I'm not sure if it is the reciprocal part of the reciprocal recognition that ensures that the Kantian sort of autonomy remains. It is also a little unclear to me what the bold, individual sort of autonomy is that goes missing on the Hegelian picture, once one zooms out to the level at which the community comes into view. I think it is the idea that one can get oneself into whatever normative status one wants. That sounds sort of like a normative "Humpty-Dumptyism" though.
Posted by Shawn at 8:10 PM
Brandom finished giving his Woodbridge lectures "Animating Ideas of Idealism" here at Pitt, so I thought I'd write up some brief reflections on them. The other places I've seen comments on the Woodbridge lectures are on blogs that are more informed about German idealism than I am. I won't let that stop me though. I'm just going to comment on some of the things I found most interesting. The first lecture was primarily on Kant. A lot of the philosophical vocabulary used in this lecture was Brandomian, not Kantian, as with all his historical work.
One of the points he emphasized was the contribution to semantics that Kant made. This was in large part his rejection of the traditional view of predication. This story has been told by Brandom in a few places, and it seems like an accurate one. A novel feature of the lecture was the three-part process of synthesizing judgments. These three steps are critical, ampliative, and justificatory. The first is the rejection of some claims when incompatibilities arise. The second is drawing conceptual consequences from what is believed. The third is, well, justifying things believed in terms of other things. Synthesizing judgments into a unity is a matter of integrating each judgment into a unity of apperception. This has a distinctively Brandomian flavor to it since the three-part process involves incompatibilities, commitments, and entitlements, to use a slightly different philosophical vocabulary. This, and other bits of the lecture, made his Kant sound a lot like his Hegel, which in turn sounds a lot like him. He says he found his views in Hegel though. I think someone pressed him on this at one point in the questions and he admitted that he was looking at things through Hegelian spectacles. He said he was going to justify this sort of historical enterprise in the third lecture but it turned out to be a repetition of the stuff on "bebop history" from the opening sections of Tales of the Mighty Dead.
One more troubling thing for his account was brought out during the questions. He views objects as things that "repel" incompatible properties with alethic modal force. He uses the notion of incompatibility to arrive at the objects via a kind of triangulation, that is, "A is a fox" and "A is a dog" are only incompatible if they are about the same object; "A is a fox" and "B is a dog" need not be incompatible. This way of putting things seems to presuppose the object/property distinction already in order to make sense of incompatibilities. The hope is that if one starts with incompatibilities among judgments with no internal structure, one can work out a structure that breaks them into something like this form. This point was pushed since it isn't clear that one can always work out a way of breaking things up into subsentential bits that results in enough unique objects. The inferential "equations" might not yield a unique solution in terms of objects. This seems like a big problem. Brandom has a paper where he gives some necessary and sufficient conditions on taking a bunch of inferences with incompatible propositions labeled, and working out which are talking about the same things. [Edit: The paper is, I believe, "Singular terms and sentential sign designs" in Philosophical Topics 15.] However, this requires some big assumptions that he was hesitant to attribute to Kant and Hegel. There currently isn't a weaker set of conditions that would guarantee the success of this sort of process, although Brandom seemed to think it could be done. This seemed like an interesting project to work out. If this problem could be solved one way or the other, it could provide a lot of support for or arguments against a sort of inferentialism.
Posted by Shawn at 3:41 PM
Friday, January 11, 2008
This might be common knowledge amongst the people who are in the know, but I just found out about this. Paul Spade at Indiana has a website full of stuff on medieval logic and philosophy of language. This includes a full manuscript he wrote on the late medieval views on these things, including material on Buridan and Occam. He also has a lot of translations of relevant material up. I've read through the first two chapters of his book and it looks like it will be informative and interesting. Chapter two has a brief overview of the development of logic from Aristotle and the Stoics up to the 13th century. It also has a cute picture of the dragon of supposition.
Wednesday, January 09, 2008
This weekend Brandom is giving his Woodbridge lectures again. This time around they will be at Pitt and spread over two days. I'll post thoughts on them afterwards. I'd take a shot at liveblogging them, but, really, I don't foresee that ever working well for philosophy, at least for me.
Posted by Shawn at 3:46 PM
Sunday, January 06, 2008
In an introductory article on forcing, Timothy Chow mentions something he calls "exposition problems," which are the problems of presenting some material in such a way that it is perspicuous, clear, explained, and learnable. He thinks that forcing presents an open exposition problem. I just read through Ramberg's Donald Davidson's Philosophy of Language and it goes a way towards an answer to the exposition problem for Davidsonian philosophy of language. With the exception of the incommensurability chapter towards the end, it is remarkably clear and quite helpful. I'm not sure if it would be perspicuous to someone coming to it without having read at least some of the Davidson articles. If you have read them it does a good job of displaying the unity of Davidson's thought on language which is not always apparent when, say, one juxtaposes "Truth and Meaning" and "A Nice Derangement of Epitaphs". Ramberg isn't doing straight Davidson exposition though and the volume of quotation is rather meager. He does succeed in presenting Davidson's ideas in a coherent, unified, perspicuous manner that, at least for me, made things gel. One of the things that he emphasized is that interpretation is a process that is supposed to result continuously in the revision of theories of truth rather than a single theory. This is maybe easier to see in "Nice Derangement" than the early stuff. I don't know if Davidson made this explicit anywhere though. Anecdotally, I heard someone say that Davidson endorsed this book as a better explanation of his theory than he ever gave.
I came across something while reading this that reminded me of a claim Davidson makes which I've never quite gotten. He claims that in order to interpret someone you have to treat their beliefs as mostly true. Since beliefs are mostly true, there isn't the possibility of systematic error of the kind skepticism points to. Ramberg didn't say much about this that clarified why this is so. He may have said some things in relation to the principle of charity that are relevant, and I suspect there is a connection to his rejection of the principle of humanity (aim to maximize intelligibility rather than agreement). However, it seems like if I ran into a modern Don Quixote, who took cars to be metal horses and who took my apartment to be a castle and me to be a coffee bean, I could interpret his (bizarre) behavior even though it seems like most everything he says is false. It may take a little while for enough of his knight-errant tale to come out, but it seems like his speech would be interpretable. Despite the fact that most of what he says is false, one would be able to work out the ways in which it is false, thereby making sense of him. Maybe the idea is supposed to be that there is a lot more that he believes that is true, or at least that you take to be true, that is semantically connected to what he says, though not made explicit in his speech behavior (possibly implicit in his nonverbal behavior). This other stuff must, for the most part, be true in order for us to make sense of him. But if my Don is under the impression that he is floating above the surface of Mars, many of these background beliefs go false too. It seems like I'd be able to interpret him, with some difficulty, yet his beliefs are systematically mistaken. I don't think I could interpret him if I didn't take him as treating most of his beliefs as true. This, however, isn't what Davidson claims.
He thinks that it would be impossible to interpret someone unless you treated them as having mostly true beliefs. So, I am stuck.