Words and Other Things (Shawn) — this blog has moved to <a href="http://inferential.wordpress.com">http://inferential.wordpress.com</a>.

Words and other things v2.0 (2009-01-13)

After a couple of years using Blogger, I'm tired of the lack of some key blogging functionality. I've decided it is time for a change. I'm moving over to a Wordpress blog. The new URL is <a href="http://inferential.wordpress.com/">inferential.wordpress.com</a>. Unfortunately, indexical.wordpress was taken. I do, however, have the domain <a href="http://www.indexical.net">indexical.net</a>, which will redirect to the new blog. <br /><br />All of my old posts and all of the comments have transferred to the Wordpress blog. Unfortunately, the links on the author names in the comments didn't transfer. There are also some self-links in the old posts that still refer to the blogspot address. I hope to fix those in the near future. Other than that, things seem to be up and running at the new place, so please update your RSS readers and join me there. <br /><br />[Edit: Please update your blogrolls as well.]<br /><br />This blog will remain up, although I've shut down new comments.

Yet more notes on the Search (2009-01-11)

There was a lot in the Search on Russell, so I will continue my notes, mainly on Russell. <br /><span class="fullpost"> <br />One of the surprising parts of the book was the treatment of the meta-/object language distinction. 
Grattan-Guinness attributes the distinction to Russell. Russell didn't make it explicitly, although Grattan-Guinness thinks he all but said it. The passage that he cites in defense of this is in Russell's introduction to the Tractatus. In talking about the saying/showing distinction, Russell says something about there being an infinity of languages, each language talking about what can be said or shown in some other language. His comments may have inspired the distinction; whether they did shouldn't be hard to verify. Did any of the people who initially made the distinction explicit cite Russell to this effect? Interpreting Russell in the way Grattan-Guinness proposes seems a little weak, especially given that the distinction is nowhere to be found in PM. Bernays seems to be cited as the first to make the distinction explicit, when he distinguished axioms from rules of inference in 1918. Regardless, Grattan-Guinness takes Russell's comment to undercut the saying/showing distinction in the Tractatus. According to Grattan-Guinness, Carnap (and presumably Goedel elsewhere) similarly undercut the distinction with the arithmetization of syntax as presented in Logical Syntax. While the book is not focused on Tractatus interpretation, it is unfortunate that the case is presented as so definitively closed on this score. <br /><br />There was a good, albeit brief, section on Polish logicians post-PM. One of the noteworthy parts was on Lesniewski. He picked up on some of the problems in PM, such as the status of ⊦ in '⊦p'. Russell wanted to follow Frege in viewing it as an assertion sign. Lesniewski proposed abandoning assertion and related notions, which led to some difficulties in the philosophy of the logic of PM. Of course, as has been pointed out in comments here, the notion of assertion in logic has been resurrected by constructive type theorists. I'm not clear on how the type theorists' view of assertion connects to what is roughly found in PM. 
I'd like to get clearer on it to see if there is something from Frege/PM that can be salvaged once one has the meta-/object language distinction. Grattan-Guinness does a reasonably good job of conveying what a big deal the distinction was in the development of logic. He presents an ample selection of claims from PM which cry out for the distinction. <br /><br />Grattan-Guinness makes some odd comments about Russell in places. The oddest is the following, from p. 443: "The emphasis on extensionality [in PM 2nd ed.] hardly fits well with a logicism in which, for example, non-denumerability is central." This seems odd to me because I'm not sure what non-denumerability has to do with extensionality. They seem to be orthogonal. Similarly odd comments about extensionality were made in the context of discussing the Tractatus. I didn't note any, but I expect there were some made in presenting Quine's extensional revamping of PM for his Ph.D. thesis. <br /><br />The discussion of post-PM logic is somewhat short, since it is mainly there to tie up loose ends from the PM period. Carnap and Quine are treated very swiftly. If one listened only to Grattan-Guinness, one might think that the main contribution of Carnap was to write Logical Syntax and realize what a mistake it was. Quine is passed over quickly, the discussion mainly concerning how his book Mathematical Logic failed to talk about the incompleteness theorems. For some reason, the incompleteness theorems were called the incompletability theorems, an apt name that I've never come across before. Goedel was given a short and rightfully glowing treatment. The discussion of the incompleteness theorems was a bit shallow, since Grattan-Guinness's main point concerning them was that they refuted Hilbert's program. He speculated that no one at the conference at which Goedel presented the first theorem grasped its significance. 
This is, from what I'm told, false, since von Neumann was there and apparently suggested to Goedel that he look for an arithmetic statement of the appropriate form, which indicates that he did have an idea of what was going on. (I don't have a reference for this handy, but I could probably track it down.)</span>

Even more notes on the Search (2009-01-09)

A large part of the Search for Mathematical Roots focuses on Russell and the development of Principia (PM, hereafter). I found these chapters, which roughly comprise the latter half of the book, to be quite helpful, since I'm less familiar with Russell than with Frege and Wittgenstein, and the chapters do a good job of explaining the influence of PM. In an interesting bit of trivia, Grattan-Guinness says that its name is a nod, not to Newton's book, but to Moore's Principia Ethica, Moore being a huge influence on Russell's philosophical development. Grattan-Guinness makes Russell out to be heavily influenced by Peano. In the historical narrative, Russell's interests seem to change along the same lines that Peano's do. An exception to this is that Russell maintains that math is a part of logic whereas Peano thinks they merely overlap. Peano even wrote a paper on "the" in which he gave the same principles for its meaning that Russell did in "On Denoting." According to Grattan-Guinness, Russell seems to have been familiar with that paper, having read it at least once, but seems to have forgotten about it. <br /><span class="fullpost"> <br />There are two sections devoted to Wittgenstein in the midst of the Russell chapters. The first is on Russell's early engagement with Wittgenstein, while Russell was writing the Theory of Knowledge. 
This section is exceedingly short, for a few reasons. One is that Grattan-Guinness wants to focus on the history as it relates to math and logic more than to epistemology, and a lot of Wittgenstein's criticisms were directed towards the epistemology of that book. Grattan-Guinness does point out that Wittgenstein, not being a logicist, had problems with Russell's logicism infecting his logic. This was an oddly sharp point about Wittgenstein, since the rest of the Wittgenstein commentary is somewhat lame. The other section on Wittgenstein is an overview of the TLP that stretches over four or so pages. This is surprisingly long given the apparently minor role it plays in the story. The interpretation of the TLP presented would not be helpful to anyone not already familiar with the book. He presents Wittgenstein as a "logical monist" without explaining this term. The rest of the exposition of the TLP is unhelpful. Its main purpose seems to be to set the stage for talking about things that Grattan-Guinness thinks Russell and Carnap got right with respect to logic. Oddly, Grattan-Guinness later says that Quine is a logical monist, possibly in the same sense that Wittgenstein was, although this is hard to discern (and almost certainly false) since the term isn't explained adequately there either. <br /><br />One gem of these chapters came out of the section on Russell's interactions with Norbert Wiener. Wiener came up with an early version of the modern definition of the ordered pair, a definition absent from PM. This was interesting since Grattan-Guinness points out, quite nicely, all the things that were absent from PM, a definition of the ordered pair in the modern sense among them. The gem is Wiener's thesis. He wrote on the differences between the algebraic and mathematical traditions of logic, the latter being that of Frege and PM. (The name "mathematical logic" seems to me not altogether happy, since the algebraic tradition was similarly mathematical. 
Perhaps it stems from the formulation of parts of mathematics in the axiomatic form of the logic.) His focus was on Schroeder and PM. Because of a cold reception by Russell, Wiener never published it, not even as a survey article. Grattan-Guinness wrote a summary article that I'm interested in tracking down, this topic being a recently developed interest of mine. <br /><br />There is a large chunk of the book dedicated to investigating the influence of and reactions to PM. It seems that many early reactions to it were largely negative. Many people in the Kantian tradition of logic were fairly critical of ex falso. It was nice to see a glimpse of the pre-history of relevance logic in the book. C.I. Lewis was pretty critical of it on this score as well. Many critics were skeptical of the philosophical value of all the axiom chopping that goes on in PM. Sounding quite similar to things one hears, in some circles, about contemporary logic, they thought that the development in PM was an interesting mathematical exercise that didn't end up illuminating key philosophical topics. The logicism of the book was also heavily criticized, on the basis of the presence of the axioms of infinity and reducibility. Grattan-Guinness points out a criticism that was not made. Large parts of mathematics were not treated in PM, or even sketched, so that it was not clear how or whether those parts could be given a logicist treatment. On the one hand, this seems unfair, since there is only so much that Russell and Whitehead could've done. They showed how to cast a lot of math in the system of PM. On the other hand, the lack of a treatment of some of the mathematics of their time is surely a failing. 
One would expect them to respond that, in principle, the rest of the math could be accommodated like the stuff they've already covered.</span>

More notes on the Search (2009-01-08)

In <a href="http://indexical.blogspot.com/2009/01/notes-on-search-for-mathematical-roots.html">my previous post on Search for Mathematical Roots</a>, I mentioned that one of the things that Frege complained about, according to Grattan-Guinness, was the overloading of symbols. I want to expand on that briefly in this post. I said that the same thing was done in proof theory. It seems that this is not completely accurate. The situation, from what I can tell from the Search for Mathematical Roots, was that logicians, especially in the algebraic tradition, would often use one symbol to designate many different things. For example, some would use '1' to designate the universe of discourse as well as the truth-value true. The equals sign '=' would do double duty as identity among elements as well as equivalence between propositions. It would also get used to indicate a definition. This would not be so bad, but often there was no clear way to tell which sense a sign was being used in; sometimes a sign appeared in one sentence in multiple senses. Things get hairier when quantifiers are added, since the variables quantified over would sometimes be propositions and sometimes not. <br /><span class="fullpost"> <br /><br />How is this different from the situation in proof theory? In proof theory symbols are often overloaded in the sense that, in a sequent calculus, the symbol on the left means something different than on the right. 
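In modern notation, the standard reading of a two-sided sequent makes the left/right asymmetry explicit (this is my own gloss, not a formulation from the book):

```latex
% Reading of a sequent: the comma is conjunctive on the left,
% disjunctive on the right.
\varphi_1, \ldots, \varphi_m \vdash \psi_1, \ldots, \psi_n
\quad\text{is read as}\quad
(\varphi_1 \land \cdots \land \varphi_m) \rightarrow (\psi_1 \lor \cdots \lor \psi_n).
% Degenerate cases: an empty left-hand side behaves as \top (the true)
% and an empty right-hand side as \bot (the false), so $\vdash \psi$
% asserts $\psi$ outright and $\varphi \vdash$ says $\varphi$ is refutable.
```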
Two examples are the comma, behaving conjunctively on the left and disjunctively on the right, and the empty sequence, acting as the true on the left and the false on the right. The difference between proof theory and the situation in the 19th century is that the overloading in proof theory is systematic. One can easily figure out what is going on based on context. Not so with the algebraists. Grattan-Guinness and Badesa indicate different points in proofs at which a symbol slides from, say, a propositional interpretation to one that doesn't have a clear propositional meaning. Often, it seems, the logicians themselves did not make the distinctions and did not notice that they were slipping between the distinct senses. The overloading in proof theory is benign and useful, whereas in the case of the 19th century algebraists it was confusing and sometimes hampered understanding.<br /><br />Other examples are the signs for membership and set-inclusion, also things Frege harped on. Some logicians used membership and inclusion interchangeably. One reason Grattan-Guinness gives is that they were taking membership to be along the lines of the part-whole relationship of mereology, though he didn't use the word "mereology." While for some this slip was due to unclarity about the concepts involved, this particular instance of overloading wasn't seen as bad. Peano apparently was guilty of it, and Quine, in an article on Peano's contributions to logic, said that he was right to do so. One gets an interesting set theory, one Quine liked, if one doesn't distinguish an element from its singleton, thus treating membership the same as singleton inclusion.</span>

Do the proof (2009-01-07)

In Hodges's model theory book, there is a proof that sort of surprised me. 
The proof is for Lemma 9.1.5 and runs as follows. <br /><blockquote><br />A Skolem theory is axiomatized by a set of ∀<sub>1</sub> sentences and modulo the theory, every formula is equivalent to a quantifier-free formula. Now quote Lemma 9.1.3 and Theorem 9.1.1. <br /></blockquote><br />What surprised me is the last sentence. It is more like a recipe, telling you what to do. Some other proofs in the book have this character to a small degree, but this one stood out a little. The proof is fine, but, unlike many other proofs, this one seems to encourage the reader to do the proof as well. Is this sort of "more active" style of writing proofs common? I don't think I've come across it much at all in the things I read. I'd expect the last line of the proof to go: "The result then follows from Lemma 9.1.3 and Theorem 9.1.1."

Notes on the Search for Mathematical Roots (2009-01-03)

I'm reading Grattan-Guinness's <a href="http://www.amazon.com/Search-Mathematical-Roots-1870-1940/dp/069105858X/ref=sr_1_1?ie=UTF8&s=books&qid=1231125507&sr=8-1">The Search for Mathematical Roots</a>. There is a lot of philosophically interesting material in the book, although a decent amount of his commentary on it is not particularly illuminating. Nonetheless, he gives a pretty good sense of the development of certain trends and of some concepts. In particular, the account of the development of the algebraic tradition in logic is helpful, especially alongside the first chapter of Badesa's book. He doesn't put as fine a point on it as I'd like, though. The presentation of the development of Russell's logicism and his split from his neo-Hegelian upbringing is well done. I'm going to write up some notes on the book, which will be spread over a few posts. In this one, I'll focus on a few sections from the middle of the book. 
<br /><span class="fullpost"> <br />Frege gets stuck in the middle of the chapter on concurrent developments in math and logic, along with Husserl and Hilbert. Grattan-Guinness is not terribly sympathetic to Frege. He wants to distinguish Frege the mathematician from Frege', the philosopher of language and mathematics who is, by his lights, mainly a product of 20th-century philosophical commentators. He points out Frege's disagreements with some of his contemporaries, such as Cantor and Thomae. The description of the Frege-Hilbert-Korselt exchange was not particularly detailed, and he made Hilbert out to be the clear victor. <br /><br />Two things that Grattan-Guinness repeatedly mentions Frege as objecting to were bad definitional practice, i.e. implicit definitions via axiom systems, and unclear use of symbolism, i.e. overloading symbols. Frege preferred explicit definitions, single biconditionals, and apparently wrote a lot about people who cited as a definition something that was not of that form. He did not seem to require that definitional extensions be conservative. From what Grattan-Guinness says, there is no mention of eliminability, although that should follow from the explicit form of the definition. I don't remember if Frege anywhere commented on that. I want to say more on the overloading of symbols at another time. It was a practice that was rampant in the early development of algebraic logic, although it is still widespread in proof theory today. It seemed to lead to more confusion back then, possibly because the concepts involved were ill-understood and less clearly formulated. <br /><br />One puzzling thing was that Grattan-Guinness made it sound like Frege was against non-Euclidean geometries for some reason, and that this formed the basis of his criticisms of Hilbert. 
I'm not sure about the interpretation of Frege's criticisms, but, from what I've heard, one of Frege's theses was on a non-standard geometry of some sort. This makes it difficult to see why Grattan-Guinness portrays Frege the way that he does. One thing worth drawing attention to is that Grattan-Guinness points out the limits of the scope of Frege's logicism. He says it is limited to arithmetic, possibly meant to encompass the rational and real numbers, and not at all encompassing geometry, probability, or most other areas of math. <br /><br />There was a section on Husserl focusing on his mathematical roots, transition to phenomenology, and exchange with Frege. This section was hard to follow, although I was quite hopeful about it. I'm not sure whether the difficulty of this section was due to Grattan-Guinness or due to Husserl. The latter wouldn't be surprising. As near as I can tell, Husserl said nothing interesting about arithmetic. <br /><br />There is a chapter on Peano, his influence and his followers. The thing that stuck out the most for me from this chapter was on the formulation of arithmetic. Grattan-Guinness lists Peano's axioms for arithmetic and notes that the induction axiom is first-order. This fact garnered a couple of sentences in the book but not much more. I was surprised by this since I had been taught that while the formulation we use is a first-order schema, Peano's original version was a second-order axiom. This appears to be false. I wonder what the origin of the "Peano's axiom was second-order" view is.</span>

A few reflections on the fall term (2008-12-31)

I'm slightly late with this, but I'll go ahead with it anyway. A few posts ago I said that I'd come up with some reflections on the term. 
This is more for my benefit than for the benefit of others, but someone might find it interesting. <br /><span class="fullpost"> <br />This term I took three classes: proof theory with Belnap, philosophy of math with Wilson, and truth with Gupta. They meshed well. I had hoped that I would generate a prospectus topic out of them. I have some ideas, but nothing as concrete as a prospectus topic. I also met regularly with Ken Manders to talk about prospectus ideas. This helped me come up with some promising stuff that I'll hopefully work out over the next few weeks. <br /><br />I'll start with the proof theory class. This was class number two on the topic. This time around it was with Belnap and it focused on substructural logics, particularly relevance logic. I was happy with that since I've lately been bitten by the relevance logic bug. We did some of the expected stuff and spent a while going through Gentzen's proof of cut elimination in detail. I think I got a pretty good sense of the proof. What I was rather pleased with were the forays into combinators and display logic. We did some stuff with combinators in the other proof theory class, but it was relatively unclear to me, at the time, why. This time the connection between combinators and structural rules was made clear. Display logic is, I think, quite neat. I ended up writing a short paper on some philosophical aspects of display logic. <br /><br />Next up is the philosophy of math class with Wilson. The focus of the class was on Frege's philosophy of math, particularly as it related to some of the mathematical developments of his time. I'm not sure how much it changed my view of Frege. I think it did make it clear that interpreting Frege is less straightforward than I previously thought. 
I'm more convinced of the importance of seeing Frege's work in logic in the context of the worries about number systems and foundations going on in the late 19th century. I suspect that the long-term upshot of this class will be what it got me to read. I was turned on to a few books on the history of math, e.g. Gray's and Grattan-Guinness's. There is a lot of material in those books on the development of concepts that is fairly digestible. I'm thinking about writing a paper on concept development in the late 19th century in something like the vein of Wandering Significance. We'll have to see how that goes. Through some of these readings I got a bit stuck on how to characterize the algebraic tradition in logic. There's something distinctive there, especially when compared to Frege's views, and I'd like to have a better sense of what is going on. It seems like it would help with interpreting the Tractatus. <br /><br />Last is Gupta's truth and paradox seminar. We covered hierarchy theories of truth briefly, moved on to fixed point theories, then spent a long while on the revision theory as presented in the Revision Theory of Truth. This class was excellent. I think I got a decent handle on the basic issues in theories of truth. There was a lot of formal work and that was balanced against non-formal philosophical stuff fairly well. I'm doubtful I will write a dissertation on truth theory, but one upshot is that it has given me some good perspective on the notions of content and expressive power. These are treated well in RTT, and the discussion there has helped me generate some ideas that I'm trying to develop. The other upshot is that I got to study circular definitions. I have an interest in definitions anyway, and circular definitions, I've confirmed, are awesome. I wrote a short paper for this class on some things in the proof theory of circular definitions. I think it turned out well. <br /><br />As I mentioned before, I've finished my coursework as of this term. 
Next term I'll be focusing primarily on working out a prospectus. I'll also be attending a few seminars: category theory with Steve Awodey, philosophy of math with Manders, and the later Wittgenstein with Ricketts. I'm not sure to what degree, if at all, these will figure in my dissertation, but they seem too good to pass up. <br /><br />Blogging has been somewhat light this term, since I got a bit overburdened with regular meetings with professors, finishing some papers, and taking two logic classes. It was fairly productive though. Posting should increase some next term. (There's been a general slowdown in the philosophy blogosphere this semester, at least in the parts I read, that I'm a little bummed to have contributed to, but maybe things will perk up going forward, or maybe not.) <br /><br />Next term should start well. The classes look good. I'm going to get to spend the next few weeks finishing up some reading and trying to formulate a few thoughts better. In late January, Greg Restall will be giving a talk at Pitt, which should be fun.<br /><br />Also, this was post number 400 for my blog. Between all the announcements and links, that means I've gotten a good 200 or so vaguely philosophical posts online.</span>

Definitions and content (2008-12-30)

Reading the Revision Theory of Truth has given me an idea that I'm trying to work out. This post is a sketch of a rough version of it. The idea is that circular definitions motivate the need for a sharper conception of content on an inferentialist picture, and possibly on other pictures of content too. It might have a connection to harmony as well, although that thread sort of drops out. The conclusion is somewhat programmatic, owing to the holidays hitting. 
<br /><span class="fullpost"> <br />In Articulating Reasons, there is a short discussion of harmony. It begins with a discussion of Dummett on slurs. Brandom says, "If Dummett is suggesting that what is wrong with the concept Boche is that its addition represents a nonconservative extension of the rest of the language, he is mistaken. Its non-conservativeness just shows that it has a substantive content, in that it implicitly involves a material inference that is not already implicit in the contents of other concepts being employed." (p. 71) <br />Being a non-conservative addition to a language means that the addition has a substantive content. It licenses inferences, to conclusions in which it does not appear, that were not previously licensed. I want to point out something that doesn't seem to fit happily within this picture.<br /><br />In the Revision Theory of Truth, Gupta and Belnap present a theory of circular definitions and several semantical systems for them. I will focus on the weakest system, S<sub>0</sub>. According to S<sub>0</sub>, the addition of circular definitions to a language constitutes a conservative extension of the language. That being said, it seems like the introduction of circular definitions brings with it a substantive content, given by the revision sequences and the set of definitions. From the quote, it seems that Brandom is saying that non-conservativeness is sufficient for substantive content, not necessary. Otherwise we would have a nice counterexample. If we look at stronger systems, the S<sub>n</sub> systems (n>0), then it turns out that the same set of definitions may not yield a conservative extension of the language. <br /><br />It might be misleading to cast things in terms of the semantical systems, since Brandom casts things in terms of inferential role. Gupta and Belnap offer proof systems for the S<sub>0</sub> and S<sub>n</sub> systems. 
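To make the revision-theoretic machinery a bit more vivid, here is a toy computation of a revision sequence for a single circular definition over a two-element domain. This is purely my own illustrative sketch — the definition, the domain, and all the names are invented for the example; it is not anything from Gupta and Belnap's text.

```python
# A toy revision sequence for a circular definition, in the spirit of
# revision theory (an illustrative sketch, not the formal apparatus).
#
# Circular definition: Gx =df (x is 'a' and a is not G) or (x is 'b').
# The definiens mentions G itself, so we evaluate it relative to a
# hypothesis (a guess at G's extension) and then revise repeatedly.

DOMAIN = {'a', 'b'}

def definiens(x, hypothesis):
    """Evaluate the definiens of G at x, relative to a hypothesized extension."""
    return (x == 'a' and 'a' not in hypothesis) or x == 'b'

def revise(hypothesis):
    """One revision step: collect everything now satisfying the definiens."""
    return {x for x in DOMAIN if definiens(x, hypothesis)}

# Run a revision sequence starting from the empty hypothesis.
seq = [set()]
for _ in range(6):
    seq.append(revise(seq[-1]))

print(seq)
# 'b' lands in every hypothesis after the first revision (stably in),
# while 'a' flips in and out forever (unstable) -- the mark of a
# definition that is genuinely, and not just superficially, circular.
```

A definiens that is tautological, by contrast, would stabilize immediately from any starting hypothesis, which is the sort of circular definition one would not want to count as substantive.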
These proof systems are sound and complete with respect to the appropriate semantical systems. In the proof system C<sub>0</sub>, the addition of a set of definitions to a language yields a conservative extension, as would be expected. <br /><br />The fact that circular definitions are conservative in S<sub>0</sub> doesn't upset Brandom's claim above. It doesn't seem like we want to say that all circular definitions lack substantive content. Unlike non-circular definitions, which are conservative over the base language and eliminable, they are not mere abbreviations. The addition of circular definitions has semantical consequences in the form of new validities. Circular definitions point out the need for necessary conditions on the notion of substantive content, since one would expect that there are circular definitions that aren't substantive, e.g. one with a definiens that is tautological. Alternatively, a sharper notion of content, and so of substantive content, would help clarify what is going on with circular definitions of different stripes.</span>

Two Quinean things (2008-12-27)

In my browsing of Amazon, I came across something kind of exciting. There are two new collections of Quine's work coming, edited by Dagfinn Follesdal and Douglas Quine. They are <a href="http://www.amazon.com/Confessions-Confirmed-Extensionalist-Other-Essays/dp/0674030842/ref=pd_bbs_9?ie=UTF8&s=books&qid=1230432782&sr=8-9">Confessions of a Confirmed Extensionalist and Other Essays</a> and <a href="http://www.amazon.com/Quine-Dialogue-W-V/dp/0674030834/ref=pd_bbs_sr_5?ie=UTF8&s=books&qid=1230432782&sr=8-5">Quine in Dialogue</a>. The former appears to be split between previously uncollected essays, previously unpublished essays, and more recent essays. 
The latter appears to consist of a lot of lighter pieces, reviews, and interviews. Amazon doesn't seem to have the tables of contents available yet, but they are available at the publisher's pages, <a href="http://www.hup.harvard.edu/catalog/QUICON.html?show=contents">here</a> and <a href="http://www.hup.harvard.edu/catalog/QUIQUD.html?show=contents">here</a>. Both look promising for those who are interested in Quine. I'm curious to read Quine's review of Lakatos in the latter volume. It could be wildly disappointing, but it would be nice to see Quine's reaction to a philosophy of math that is so at odds with his own. [Edit: In the comments, Douglas Quine points out that more detailed information on the new volumes, as well as information on other centennial events, is up on <a href="http://www.wvquine.org/">the W.V. Quine website</a>.]<br /><br />The other Quinean thing is a question. Is there anywhere in Quine's writings where he discusses the role of statistics and probability in modern science? It seemed like there could be something there that could be used as the beginning of an objection to Quine's fairly tidy picture of scientific inquiry. (This thought is sort of half-baked at this point.) Over the holidays I couldn't think of anywhere Quine talked about how statistics fit into his epistemological views. It seemed odd that Quine didn't ever discuss it, given the importance of statistics in science, so I'm fairly sure I'm forgetting or overlooking something. There might be something in From Stimulus to Science or Pursuit of Truth, but I won't have access to those for a few days yet. 
[Edit: In the comments <a href="http://obscureandconfused.blogspot.com/">Greg</a> points out that Sober presented a sketch of a criticism along the lines above in his paper "Quine's Two Dogmas," available for download on <a href="http://philosophy.wisc.edu/sober/recent.html">his papers page</a>.]Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com3tag:blogger.com,1999:blog-15452141.post-6298520702923470702008-12-17T22:33:00.000-08:002008-12-28T15:15:53.110-08:00Question about negationsDoes anyone know of any proof systems in which some but not all contents have negations? I'm looking for examples for a developing project.Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com6tag:blogger.com,1999:blog-15452141.post-59651479770953512052008-12-17T22:09:00.000-08:002008-12-30T14:05:43.474-08:00Semantic self-sufficiencyI'm trying to work out some thoughts on the topic of semantic self-sufficiency. My jumping off point for this is the exchange between McGee, Martin and Gupta on the Revision Theory of Truth. My post was too long, even with part of it incomplete, so I'm going to post part of it, mostly expository, today. The rest I hope to finish up tomorrow. I'm also fairly unread in the literature on this topic. I know Tarski was doubtful about self-sufficiency and Fitch thought Tarski was overly doubtful. Are there any particular contemporary discussions of these issues that come highly recommended? <a href="http://indexical.blogspot.com/2008/12/semantic-self-sufficiency.html">Read more</a><br /><span class="fullpost"><br />In their criticisms of the revision theory, both McGee and Martin say that a fault of the revision theory is that it is not semantically self-sufficient. The metalanguage for its characterization of truth must be stronger than the object language. 
Martin puts the point as follows.<br /><blockquote>[Gupta and Belnap] dismiss the goal of trying to understand truth for a language entirely from within the language. Although they point out some problems with the very notion of a universal language... The problem that the semantic paradoxes pose is not primarily the problem of understanding the notion of truth in expressively rich languages, it is the problem of understanding <i>our</i> notion of truth. And we have no language beyond our own in which to discuss this problem and in which to formulate our answers.</blockquote><br />McGee expresses a similar sentiment. <br /><br />Gupta, in his reply to Martin and McGee, presents the objection in the following way. (I follow Gupta pretty closely here.) (1) A semantic description of English must be possible. (2) This description must be formulable in English itself, i.e. English must be semantically self-sufficient. (3) The revision semantics for a language can only be constructed in a richer metalanguage. (4) Revision semantics is therefore not suitable for English. (5) Therefore, revision semantics doesn't capture the notion of truth in English. The problem Gupta diagnoses is that this takes the aim of the project of investigating truth to be one thing, while he sees it as another. McGee and Martin see the goal as the construction of a language L that can express its own semantic theory. Gupta sees the goal as giving a semantics of the predicate "true in L" of L, generally. He calls the former the self-sufficiency project and the latter the truth project. Gupta points out that the truth project, the goal of the revision theory, is independent of the self-sufficiency project, while the self-sufficiency project is not independent of the truth project. (Gupta also gave a partial, positive answer to the self-sufficiency project, for languages that lack certain sorts of self-reference.) Gupta expresses some doubt about the prospects of full success in the self-sufficiency project. 
In what follows I'll present Gupta's arguments. <br /><br />To set the stage for the doubts, we need to idealize English, or any language, as frozen at some stage in its development. Otherwise there are possibly extraneous concerns that arise about self-sufficiency at different times and with respect to different times. If we take English to be fixed at some stage of its development, there is a problem about spelling out what conceptual resources it contains and are at the disposal of the semantic theory. Part of the reason for thinking that English is semantically self-sufficient is that it has a great deal of expressive power and flexibility. Expressions can be cooked up to denote any expression or thing one might want. Gupta puts it this way: there are expressions whose interpretations can be varied indefinitely.<br /><br />Gupta poses a dilemma. Either the semantic self-sufficiency of English is due to this flexibility or it is not. If it is, then there is no motivation for semantic self-sufficiency. The self-sufficiency project supposes fixed conceptual resources, so the flexibility of English cannot motivate that project after all. If not, we can suppose that English has fixed conceptual resources. Given that, what reason is there to think that English is self-sufficient? There is no empirical confirmation of this. There doesn't seem to be much by way of a priori reason for thinking it either. In either case, there doesn't seem to be any motivation for taking English to be self-sufficient. He considers one such motivation, although his discussion and criticism of it could stand independently of the dilemma posed here. <br /><br />Gupta notes that the thing most often cited in favor of thinking that English is semantically self-sufficient is the "comprehensibility of English by English speakers." This has an ambiguity. In one sense, 'comprehensibility' is the ability to use and understand the language. 
In another sense, it is the ability to give a systematic semantic theory for the language. The claim is then that English speakers can give a systematic semantic theory for their language. In the former sense, the claim is trivial. In the latter sense, there is not really any reason to believe it. <br /><br />This point seems to me to be allied to thoughts about following rules and norms. One can follow rules quite easily. It is often quite hard to make explicit the rules and norms one is following and how they systematically fit together. If giving a semantic theory involves something like this, then it wouldn't be unexpected that some difficulties would arise. I don't think Gupta has this in mind though.<br /><br />Gupta raises a second worry. Even if the previous one can be overcome, there is the problem of giving a semantic theory for a stage of English within that stage, because there is a gap between the abilities of English speakers and the resources available at a given stage of English. The speakers can, and do, enrich their vocabulary with mathematical and logical resources, and the ability to do this might be intimately bound up with the ability to give a semantics for English. Appealing to the abilities of speakers of a language to motivate semantic self-sufficiency then seems to create a problem. One could idealize the speakers as similarly frozen at a given stage, along with their language, so that no developmental capacities enter into the picture. To claim semantic self-sufficiency here is simply to disregard the previous objection. It is also unclear why one would think that in such a scenario semantic self-sufficiency would obtain.<br /><br />The point of Gupta's criticism is not to refute all hope of semantic self-sufficiency. It is rather to cast doubt on it and its motivations. If he is right, then it shouldn't be taken as a basic desideratum of a theory of truth, or any semantic theory. 
In that case, those parts of McGee's and Martin's objections lose their force. I think it also indicates some of the places where claims about semantic self-sufficiency need to be sharpened, which I'll try to address in a post in the near future.</span>Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com3tag:blogger.com,1999:blog-15452141.post-42963708525733056672008-12-15T21:10:00.000-08:002008-12-15T21:15:31.638-08:00Pointing out a reviewI wanted to avoid having another post that was primarily a link, but I seem to be having some difficulty getting a post together lately. In any case, there is <a href="http://ndpr.nd.edu/review.cfm?id=14886">a review of Gillian Russell's Truth in Virtue of Meaning</a> up at NDPR. The review seems to be fairly detailed, so I'll let it stand on its own.Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com0tag:blogger.com,1999:blog-15452141.post-7902119522315176192008-12-10T20:32:00.001-08:002008-12-10T21:06:15.724-08:00I declare victory over courseworkToday I submitted the last paper I needed to finish in order to fulfill my course requirements at Pitt. Now I get to concentrate on finishing up some side projects and working on my nascent prospectus. Yay! <br /><br />An end of term reflection will likely follow in the next week or so.Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com0tag:blogger.com,1999:blog-15452141.post-85567064246097210042008-12-06T14:14:00.000-08:002008-12-06T14:32:38.184-08:00Representation theorems and completenessThis term I've spent some time studying nonmonotonic logics. This led me to look at David Makinson's work. Makinson has done a lot of work in this area and he has a nice selection of articles available <a href="http://david.c.makinson.googlepages.com/listofpublications">on his website</a>. One unexpected find on his page was a paper called "Completeness Theorems, Representation Theorems: What’s the Difference?" 
A while back I had <a href="http://indexical.blogspot.com/2008/03/representation-theorems.html">posted a question about representation theorems</a>. In the comments, Greg Restall answered in detail. Makinson's paper elaborates this some. He says that representation theorems are a generalization of completeness theorems, although I don't remember why they were billed as such. There are several papers on nonmonotonic logic available there. "Bridges between classical and nonmonotonic logic" is a short paper demystifying some of the main ideas behind non-monotonic logic. The paper “How to go nonmonotonic” is a handbook article that goes into more detail and develops the nonmonotonic ideas more. Makinson has a new book on nonmonotonic logic, but it looked like most of the content, minus exercises, is already available in the handbook article online.Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com0tag:blogger.com,1999:blog-15452141.post-91733682637742308642008-12-04T09:49:00.001-08:002008-12-04T09:53:28.522-08:00BrandomiaThere is now <a href="http://www.pitt.edu/~brandom/multimedia/index.html">a multimedia section</a> up on Brandom's website. It includes the videos of the Locke lectures with commentary as given in Prague as well as the Woodbridge lectures as given at Pitt. I think one of the videos of the latter features a mildly hard to follow muddle of a question by me. If you are in to that stuff, it is well worth checking out.Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com0tag:blogger.com,1999:blog-15452141.post-91480058089390027212008-11-28T19:57:00.001-08:002008-12-02T22:18:08.432-08:00Just a reminder [deadline extended]The deadline for the <a href="http://www.pitt.edu/~philgrad/index.html">Pitt-CMU conference</a> [edit: has been extended to 12/15. 
Please submit!]Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com0tag:blogger.com,1999:blog-15452141.post-44644737637440288412008-11-25T13:04:00.000-08:002008-11-25T13:09:09.871-08:00Oh, DoverI just found out that Yiannis Moschovakis's <a href="http://www.amazon.com/Elementary-Induction-Abstract-Structures-Mathematics/dp/0486466787/ref=pd_ys_shvl_11">Elementary Induction on Abstract Structures</a> was released as a cheap Dover paperback over the summer. It was previously only available in the horrendously expensive yellow hardback series by... North-Holland, according to Amazon. The secondary literature on the revision theory of truth has recently nudged me into looking at this book, and it is nice to know that it is available at a grad-student-friendly price. Philosophical content to follow soon.Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com3tag:blogger.com,1999:blog-15452141.post-21464639907124170722008-11-24T20:39:00.000-08:002008-11-25T16:44:57.159-08:00The birth of model theoryI just finished reading Badesa's Birth of Model Theory. It places Löwenheim's proof of his major result in its historical setting and defends what is, according to the author, a new interpretation of it. This book was interesting on a few levels. First, it placed Löwenheim in the algebraic tradition of logic. Part of what this involved was spending a chapter elaborating the logical and methodological views of major figures in that tradition, Boole, Schröder, and Peirce. Badesa says that this tradition in logic hasn't received much attention from philosophers and historians. There is a book, From Peirce to Skolem, that investigates it more and that I want to read. I don't have much to say about the views of each of those logicians, but it does seem like there is something distinctive about the algebraic tradition in logic. I don't have a pithy way of putting it though, which kind of bugs me. 
Looking at Dunn's book on the technical details of the topic confirms it. From Badesa, it seems that none of the early algebraic logicians saw a distinction between syntax and semantics, i.e. between a formal language and its interpretation, nor much of a need for one. Not seeing the distinction was apparently the norm and it was really with Löwenheim's proof that the distinction came to the forefront in logic. A large part of the book is attempting to make Löwenheim's proof clearer by trying to separate the syntactic and semantic elements of the proof. <br /><br />The second interesting thing is how much better modern notation is than what Löwenheim and his contemporaries were using. I'm biased of course, but they wrote a<sub>x,y</sub> for what we'd write A(x,y). That isn't terrible, but for various reasons sometimes the subscripts on the 'a' would have superscripts and such. That quickly becomes horrendous. <br /><br />The third interesting thing is that it made clear how murky some of the key ideas of modern logic were in the early part of the 20th century. <a href="http://www.ucalgary.ca/~rzach/logblog/">Richard Zach</a> gave a talk at CMU recently about how work on the decision problem cleared up (or possibly helped isolate, I'm not sure where the discussion ended up on that) several key semantic concepts. Löwenheim apparently focused on the first-order fragment of logic as important. As mentioned, his work made the distinction between syntax and semantics important. Badesa made some further claims about how Löwenheim gave the first proof that involved explicit recursion, or some such. I was a little less clear on that, although it seems rather important. Seeing Gödel's remarks, quoted near the end of the book in footnotes, on the importance of Skolem's work following Löwenheim's was especially interesting. 
Badesa's conclusion was that one of Gödel's big contributions to logic was bringing extreme clarity to the notions involved in the completeness proof of his dissertation. <br /><br />I'm not sure the book as a whole is worth reading though. I hadn't read Löwenheim's original paper or any of the commentaries on it, which a lot of the book was directed against. The first two chapters were really interesting and there are sections of the later chapters that are good in isolation, mainly where Badesa is commenting on sundry interesting features of the proof or his reconstruction. These are usually set off in separate numbered sections. I expect the book is much more engaging if you are familiar with the commentaries on Löwenheim's paper or are working in the history of logic. That said, there are parts of it that are quite neat. Jeremy Avigad has a review on his website that sums things up pretty well also.Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com3tag:blogger.com,1999:blog-15452141.post-64940159405208076482008-11-16T09:00:00.000-08:002008-11-16T09:17:18.778-08:00Another plea for a reprintWhile I am <a href="http://notofcon.blogspot.com/2008/07/plea-for-reprint.html">coming to the pleas for reprints late</a>, it occurred to me that it would be very nice to have a Dover reprint of the two volumes of Entailment. Of course, I wouldn't complain if Princeton UP issued a cheap paperback version. They are out of print and are individually prohibitively expensive. There also are not enough copies of volume 2 floating around. It can be hard to get one's hands on volume 2 around here, which is unfortunate since I've lately needed to look at it.<br /><br />[Edit: Looking at the comments on the thread I linked to, it also strikes me that it'd be nice to have a volume on the theme of logical inferentialism. 
It would have reprints of Gentzen's main papers, some of Prawitz's stuff, Prior's tonk article, Belnap's reply and his display logic paper, an appropriate smattering of stuff from Dummett and Martin-Löf, possibly some of the technical work done by Schroeder-Heister, Kremer's philosophical papers on the topic, Hacking's piece, and some of Read's and Restall's papers. I'm sure there are others that could go into it, although I think what I've listed would already push it into the two volume range. Dare to dream...]Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com1tag:blogger.com,1999:blog-15452141.post-3598582082586871452008-11-14T17:05:00.000-08:002008-11-16T20:53:51.135-08:00Combinatory logicThere's what appears to be a nice <a href="http://plato.stanford.edu/entries/logic-combinatory/">article on combinatory logic</a> up at the Stanford Encyclopedia, authored by Katalin Bimbo. The article briefly mentions Meyer's work on combinators, and it talks about the connection between combinators and non-classical logics. However, it doesn't seem to make explicit the connection between combinators and structural rules in a sequent calculus, which Meyer calls the key to the universe. <br /><br />Bimbo's website notes that she recently wrote a book with Dunn on generalized Galois logics, which looks like it extends the last two chapters of Dunn's algebraic logic text. I'd like to get my hands on that. Time to make a request to the library...Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com2tag:blogger.com,1999:blog-15452141.post-26195625850172593832008-11-08T17:46:00.000-08:002008-11-08T17:59:10.558-08:00A note on expressive problemsIn chapter 3 of the Revision Theory of Truth (RTT), Gupta and Belnap argue against the fixed point theory of truth. They say that fixed points can’t represent truth, generally, because languages with fixed point models are expressively incomplete. 
This means that there are truth-functions in a logical scheme, say a strong Kleene language, that cannot be expressed in the language on pain of eliminating fixed points in the Kripke construction. An example of this is the Lukasiewicz biconditional. Another example is the exclusion negation. The exclusion negation of A, ¬A, is false when A is true, and true otherwise. The Lukasiewicz biconditional, A≡B, is true when A and B agree on truth value, false when they differ classically, and n otherwise. <a href="http://indexical.blogspot.com/2008/11/note-on-expressive-problems.html">Read more</a><br /><span class="fullpost"><br /><br />The shape of this argument seems to be the following. The languages we use appear to express all these constructions. If they don’t, we can surely add them or start using them. The descriptive problem of truth demands that our theories of truth work in languages that are as expressive as the ones we use. (Briefly, the descriptive problem is the problem of characterizing the behavior of truth as a concept we use, giving patterns of reasoning that are acceptable and such.) The fixed point theories prevent this; therefore, they cannot be adequate theories of truth. <br /><br />I’m not sure how forceful this argument is. I’m also not quite sure how damaging expressive incompleteness is. The expressive incompleteness at issue in the argument is truth-functional expressive incompleteness. There is lots of expressive incompleteness present in the languages under consideration. This is distinct from the language expressing its own semantics. The semantic concepts required for that need not be present since there will presumably be non-truth-functional notions used. It also isn’t part of the claim with respect to the languages we use. I will stop to discuss this point for a moment since I find it interesting. <br /><br />The languages we use may or may not be able to express their own semantics. 
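The connectives defined earlier can be made concrete in a few lines. The following is a minimal sketch of my own (not from RTT or the post): it tabulates strong Kleene negation, exclusion negation, and the Lukasiewicz biconditional, and checks which are monotone in the Kleene information ordering, monotonicity being what Kripke's fixed-point construction needs.

```python
# Toy check (my own illustration): strong Kleene negation is monotone in
# the Kleene information ordering, while exclusion negation and the
# Lukasiewicz biconditional are not.

VALS = ['t', 'f', 'n']

def leq(x, y):
    """Information ordering: n lies below both t and f."""
    return x == y or x == 'n'

def neg_sk(a):
    """Strong Kleene negation: n stays n."""
    return {'t': 'f', 'f': 't', 'n': 'n'}[a]

def neg_excl(a):
    """Exclusion negation: false when a is true, true otherwise."""
    return 'f' if a == 't' else 't'

def iff_luk(a, b):
    """Lukasiewicz biconditional: t when the values agree,
    f when they differ classically, n otherwise."""
    if a == b:
        return 't'
    if {a, b} == {'t', 'f'}:
        return 'f'
    return 'n'

def monotone1(g):
    return all(leq(g(x), g(y)) for x in VALS for y in VALS if leq(x, y))

def monotone2(g):
    return all(leq(g(x1, y1), g(x2, y2))
               for x1 in VALS for x2 in VALS if leq(x1, x2)
               for y1 in VALS for y2 in VALS if leq(y1, y2))

print(monotone1(neg_sk))    # True
print(monotone1(neg_excl))  # False: n <= t, yet neg_excl(n)=t and neg_excl(t)=f
print(monotone2(iff_luk))   # False: n <= t, yet iff_luk(n,n)=t and iff_luk(t,n)=n
```

Both counterexamples turn on the value at n jumping to a classical value that is then revised away, which is exactly the non-monotone behavior that blocks the fixed-point argument.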
As Gupta says, rightly I think, one should be suspicious of anyone who claims that we must be able to express our semantic theories in the languages they are theories for. The primary reason for this is that we don’t know what a semantic theory for the complete language would be. The extant semantic theories we have work for small, highly regimented fragments. Further, these theories are only defined on static languages, whereas the ones we use appear to be extensible. Additionally, these theories tend to be coupled to a syntactic theory that provides the structure of sentences on which the semantics recurs. There is no such syntactic theory for the languages we use either. The shape of a semantic theory for the language we use might be very different from that of the smaller models currently studied. It might not even contain truth. The requirement that a language be able to express its own semantic theory seems to stem from an idealization based on current semantic theories that, if the above is right, is illicit. The question of expressive completeness is distinct from this question of semantics. The question of what it is to give a semantics for a language in that language is interesting, and is raised in criticisms by both McGee and Martin. I hope to post on that soon.<br /><br />One question that strikes me is how central this expressive power is to the descriptive problem. Expressive power itself is a notion that is somewhat obscure until one moves to a formal context in which one can tease apart distinctions. For example, it isn’t at all apparent that ‘until’ isn’t expressible using the standard tense operators or constructions, even though all these are, arguably, readily apparent in the languages we use. 
It isn’t clear, then, that the notion of expressibility is even workable until we move to a more theoretical setting from the less theoretical setting of language in use.<br /><br />If we move to a more theoretical setting and discover that what we thought was vast expressive power has to be curtailed, then it isn’t clear that our earlier intuition is what must be preserved. One could hold out for a theory of truth that preserved it. Gupta clearly thinks this is one to hold on to. Perhaps this is what a detailed statement of the descriptive problem demands.<br /><br />Something else that I wonder about this line of thought is how common expressive incompleteness, of the truth-functional kind, is among the most prominent logical systems. We have it in the classical case. In limited circumstances, we have it even with the addition of the T-predicate. In any case, we probably don’t want to stop with just logics that treat only truth-functions and T-predicates. We might want to add modal operators of some kind, and these are not truth-functional. What sort of expressive problems are generated, or not, then? I'm not sure, although there is an excellent chapter in RTT on comparing the expressive power of necessity as a predicate and as a sentence operator.</span>Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com0tag:blogger.com,1999:blog-15452141.post-76340750891012544402008-11-04T16:56:00.000-08:002008-11-04T16:58:14.636-08:00PSAI'm going to be in Pittsburgh this weekend. If any readers or fellow bloggers are going to be in town for the PSA and want to meet up drop me a line. There are at least a few people I'm hoping to meet.Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com0tag:blogger.com,1999:blog-15452141.post-86943624846492200312008-10-30T22:04:00.000-07:002008-10-30T22:05:52.606-07:00A note on the revision theoryIn the Revision Theory of Truth, Gupta says (p. 
125) that a circular definition does not give an extension for a concept. It gives a rule of revision that yields better candidate hypotheses when given a hypothesis. More and more revisions via this rule are supposed to yield better and better hypotheses for the extension of the concept. This sounds like there is, in some way, some monotonicity given by the revision rule. What this amounts to is unclear though.<br /><br />For a contrast case, consider Kripke's fixed point theory of truth. It builds a fixed point interpretation of truth by adding more sentences to the extension and anti-extension of truth at each stage. This process is monotonic in an obvious way. The extension and anti-extension only get bigger. If we look at the hypotheses generated by the revision rules, they do not solely expand. They can shrink. They also do not solely shrink. Revision rules are non-monotonic functions. They are eventually well-behaved, but that does not amount to monotonicity. One idea would be that the set of elements that are stable under revision monotonically increases. This isn't the case either. Elements can be stable in initial segments of the revision sequence and then become unstable once a limit stage has been passed. This isn't the case for all sorts of revision sequences, but the claim in RTT seemed to be for all revision sequences. Eventually hypotheses settle down and some become periodic, but it is hard to say that periodicity indicates that the revisions result in better hypotheses. <br /><br />The claim that a rule of revision gives better candidate extensions for a concept is used primarily for motivating the idea of circular definitions. It doesn't seem to figure in the subsequent development. The theory of circular definitions is nice enough that it can stand without that motivation. Nothing important hinges on the claim that revision rules yield better definitions, so abandoning it doesn't seem like a problem. 
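The contrast between the two processes can be put in miniature. This is a toy of my own devising, not Gupta and Belnap's formalism: a Kripke-style sequence of stages only ever grows, while the revision rule for a liar-like circular definition flips its hypothesis at every step.

```python
# Toy contrast (my own illustration, not the formalism of RTT):
# Kripke-style stages grow monotonically; a revision sequence for a
# liar-like definition oscillates instead.

# Kripke-style: the interpretation of T is built in stages, and the
# extension only ever grows. Here '0=0' is a base truth, so "T('0=0')"
# is grounded and enters at the next stage.
stages = []
ext = set()
for _ in range(4):
    stages.append(ext)
    new = {'0=0'}
    if '0=0' in ext:
        new.add("T('0=0')")
    ext = new
assert all(a <= b for a, b in zip(stages, stages[1:]))  # monotone growth

# Revision: for the circular definition G := not-G (a liar-like case),
# the rule of revision maps a hypothesis to its negation, so successive
# hypotheses neither steadily grow nor steadily shrink.
hyp = True
sequence = [hyp]
for _ in range(5):
    hyp = not hyp          # one application of the rule of revision
    sequence.append(hyp)
print(sequence)            # alternates True, False, True, ...
```

The first half only ever adds sentences; the second half is already periodic with no sense in which later hypotheses are "better," which is the worry about the motivating claim.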
I'd like to make sense of it though.Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com3tag:blogger.com,1999:blog-15452141.post-34305141964269013592008-10-24T21:17:00.000-07:002008-10-24T21:25:13.811-07:00Question about the theories of models and proofsI seem to have gone an unfortunately long time without a post. I'm not quite sure how that happened. I'll try to get back on the wagon. <br /><br />I have a question that maybe some of my readers might be able to answer. What are some good sources of either criticisms of proof theoretic semantics/proof theory from the model theoretic standpoint or criticisms of model theoretic semantics/model theory from the proof theoretic standpoint? I'm sure there is a lot out there for the former that references the incompleteness theorems. I'm not sure where to look for the latter. From <a href="http://notofcon.blogspot.com/2008/08/european-phenomenon.html">a post at Nothing of Consequence</a> I've found a paper by Anna Szabolcsi that spells out a few things. Beyond that, I'm not terribly sure where to look.Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com0tag:blogger.com,1999:blog-15452141.post-46573379564333830092008-10-10T16:55:00.000-07:002010-02-08T18:55:50.039-08:00Cut and truthMichael Kremer does something interesting in his "Kripke and the Logic of Truth." He presents a series of consequence relations, ⊧<sub>(V,K)</sub>, where V is a valuation scheme and K is a class of models. He provides a cut-free sequent axiomatization of the consequence relation ⊧<sub>(V,K)</sub>, where V is the strong Kleene scheme and K is the class of all fixed points. He proves the soundness and completeness of the axiomatization with respect to the class of all fixed points. After this, he proves the admissibility of cut. I wanted to note a couple of things about this proof.<br /><br /><br />As Kremer reminds us, standard proofs of cut elimination (or admissibility) proceed by a double induction. 
The outer inductive hypothesis is on the complexity of the formula, and the inner inductive hypothesis is on how long in the proof the cut formula has been around. Kremer notes that this will not work for his logic, since the axioms for T, the truth predicate, reduce complexity. (I will follow the convention that 'A' is the quote name of the sentence A.) T'A' is atomic no matter how complex A is. ∃xTx is more complex than T'∃xTx' despite the latter being an instance of the former. Woe for the standard method of proving cut elimination. <br /><br />Kremer's method of proving cut elimination is to use the completeness proof. He notes that cut is a sound rule, so by completeness it is admissible. Given the complexity of the completeness proof, I'm not sure this saves on work, per se. <br /><br />Now that the background is in place, I can make my comments. First, he proves the admissibility of cut, rather than mix. The standard method proves the admissibility of mix, which is like cut on multiple formulas, since it would otherwise run into problems with the contraction rule. Of course, the technique Kremer used is equally applicable to mix, but the main reason we care about mix is because showing it admissible is sufficient for the admissibility of cut. Going the semantic route lets us ignore the structural rules, at least contraction. <br /><br />Next, it seems significant that Kremer used the soundness and completeness results to prove cut admissible. He describes this as "a detour through semantics." He doesn't show that a proof theory alone will be unable to prove the result, just that the standard method won't do it. Is this an indication that proof theory cannot adequately deal with semantic concepts like truth? This makes it sound like something one might have expected from Goedel's incompleteness results. There are some differences though. Goedel's results are for systems much stronger than what Kremer is working with. 
Also, one doesn't get cut elimination for the sequent calculus formulation of arithmetic. <br /><br />Lastly, the method of proving cut elimination seems somewhat at odds with the philosophical conclusions he wants to draw from his results. He cites his results in support of the view that the inferential role of logical vocabulary gives its meaning. This is because he uses the admissibility of cut to show that the truth predicate for a three-valued language is conservative over the base language and is eliminable. These are the standard criteria for definitions, so the rules can be seen as defining truth. Usually proponents of a view like this stick solely to the proof theory for support. I'm not sure what to make of the fact that the Kripke constructions used in the soundness and completeness (and so cut elimination) results do not seem to fit into this picture neatly. That being said, it isn't entirely clear that appealing to the model theory as part of the groundwork for the argument that the inference rules define truth does cause problems. I don't have any argument that it does. It seems out of step with the general framework. I think Greg Restall has a paper on second-order logic's proof theory and model theory that might be helpful...Shawnhttp://www.blogger.com/profile/15244930958211791213noreply@blogger.com0
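The complexity inversion behind Kremer's point above is easy to make vivid. Here is a crude sketch of my own; the ASCII syntax ('E' for the quantifier, quote marks for quote names) and the symbol-counting measure are made up for the example and are not Kremer's definitions.

```python
# Crude sketch (my own, with made-up syntax) of why induction on the
# complexity of the cut formula breaks down: T applied to a quote name
# yields an *atomic* sentence, whatever the quoted sentence's complexity.

def complexity(s):
    # T followed by a quote name is a single atomic sentence: complexity 0.
    if s.startswith("T'") and s.endswith("'"):
        return 0
    # Otherwise, crudely count logical symbols.
    return sum(s.count(sym) for sym in ('~', '&', 'v', 'E'))

general = 'ExTx'        # "something is true"; one quantifier, complexity 1
instance = "T'ExTx'"    # an instance of it, yet atomic: complexity 0

print(complexity(general), complexity(instance))  # 1 0
```

An instance comes out strictly simpler than the sentence it instantiates, so the usual outer induction on cut-formula complexity has nothing to recurse on, which is why Kremer detours through semantics instead.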