I'm pleased to announce the 2009 Pitt-CMU Philosophy Graduate Student Conference.
Keynote speaker: Hartry Field
Theme: Truth, Meaning and Evidence
Faculty speakers: TBA
When: March 28, 2009
Call for papers: The deadline for submissions is December 1, 2008. More information can be found here.
Also, read the reviews of last year's conference by Ole here and by Errol here.
Monday, September 29, 2008
Posted by Shawn at 9:00 AM
Here are some more thoughts on Wittgenstein on the foundations of math. For those interested, be sure to check out the prose interpretation of Wittgenstein on math and games at Logic Matters. In part 6 of the Remarks on the Foundations of Math, Wittgenstein presents several thought experiments to probe a cluster of notions: calculation, proof, and rule. One of these is two-minute England in section 34.
Two-minute England is described in the following way. God creates a country in the middle of the wilderness that is physically just like England. The caveat is that it only exists for two minutes. Everything in this new country looks like stuff in England. In particular, one sees some people doing stuff that exactly mimics what English mathematicians do when they do math. This person is the one to focus on. Wittgenstein asks, "Ought we to say that this two-minute-man is calculating? Could we for example not imagine a past and a continuation of these two minutes, which would make us call the process something quite different?"
The questions, I take it, indicate doubt that we must take the two-minute-man as calculating. We can, but it is not compulsory. This is because there is no reason, given what we've observed, to think that he must be calculating. In this case there is no fact of the matter. We might be tempted to attribute calculation to the two-minute-man because we fill out his story with events leading up to and following these two minutes that lead us to think that he is calculating. These events don't happen, since he only exists for two minutes. There is no wider context for this person that settles whether he was calculating, or scribbling, or regurgitating symbols seen elsewhere.
The point of this thought experiment is to present some evidence that calculation is not identifiable with any bit of mere behavior. Connecting this with other sections of part 6, the behavior only becomes calculation when it is connected up with some appropriate purposes or situated in a normative context in which it is appropriate to talk about correct or incorrect calculation. Wittgenstein's focus is to argue that various mathematical notions are like this.
This passage is preceded by one in which Wittgenstein says, "In order to describe the phenomenon of language, one must describe a practice, not something that happens once, no matter of what kind." The two-minute England thought experiment is intended to illustrate this point. I'm not sure that there is anything in the two-minute-man's life that would let us embed it in a practice of some sort.
Connecting the thought experiment up in a nice way with what precedes it requires fleshing out the notion of a practice, which I can't do. There are scattered remarks on that idea in the Remarks, which I haven't begun to put together. Despite this, it seems to me that two-minute England fits together better with what comes earlier and later in this part of the Remarks, namely that calculation isn't just a matter of behavior. This needs to be connected with the wider concern of what it means to follow a rule in order to make it a bit clearer, I think. Incidentally, I think that the rule-following discussion in the Remarks is more accessible than the discussion in the Philosophical Investigations.
Friday, September 26, 2008
I recently heard some speculation on the origin of the term "Hilbert system" for the axiomatic systems that are used to inflict pain on logic students, particularly at the start of proof theory classes. A long time ago, at least when Gentzen wrote, they were called logistic systems. The first publication in which they were called "Hilbert systems" seems to have been Entailment, vol. 1. Does anyone know if there are earlier uses? I'd be quite happy if that turned out to be the origin of the term.
Posted by Shawn at 8:43 PM
Friday, September 19, 2008
While doing some reading on relevance logic, I came across a review of Entailment vol. 2 by Gerard Allwein. It ends by noting that the volume doesn't include recent work on substructural logics and notes that there is an article by Avron that gives details of the relationship between linear logic and relevance logic. Allwein follows this up by saying, "It is noted by this reviewer that linear logic is intellectually anaemic compared to relevance logic in that relevance logic comes with a coherent and encompassing philosophy whereas linear logic has no such pretensions." This was written in 1994. I have no idea to what degree this is still true since I have no firm ideas about linear logic. I'm just discovering the "coherent and encompassing philosophy" that relevance logic comes with. I keep meaning to follow up on this stuff about linear logic at some point...
Since I'm talking about linear logic, I may as well link a big bibliography on linear logic.
Tuesday, September 16, 2008
I came across the following in Entailment vol. 1 while trying to get unstuck on some stuff. It is, quite possibly, the snappiest way of putting a theorem of logic I've heard. A possible contender is Halmos's way of putting the crowning glory of modern logic. Here is the theorem:
Manifest repugnancies entail every truth function to which they are analytically relevant.
Anderson and Belnap, being logicians, define both "manifest repugnancy" and "analytically relevant". The former is a conjunction of formulas such that for every atomic formula p occurring in it, ~p also occurs in it. The latter is defined as follows: A is analytically relevant to B if all propositional variables of B occur in A. This is within the context of the logic E, I think.
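As a toy sanity check of these definitions, here is my own sketch (not Anderson and Belnap's formalism), with formulas simplified to literals like "p" and "~p":

```python
def is_manifest_repugnancy(conjuncts):
    """A conjunction of literals is a manifest repugnancy when, for
    every atom p occurring in it, ~p occurs in it as well."""
    atoms = {lit.lstrip('~') for lit in conjuncts}
    return all(p in conjuncts and '~' + p in conjuncts for p in atoms)

def analytically_relevant(vars_a, vars_b):
    """A is analytically relevant to B iff every propositional
    variable of B occurs in A (here, as sets of variable names)."""
    return vars_b <= vars_a
```

So ['p', '~p', 'q', '~q'] counts as a manifest repugnancy, while ['p', '~p', 'q'] does not, since ~q is missing.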
Posted by Shawn at 2:02 PM
Sunday, September 14, 2008
In lecture three of Dynamics of Reason, Friedman addresses problems of relativism and rationality. This is needed since he leans so heavily on the picture of science coming out of Kuhn which on its own tends to invite charges of bad relativism.
Friedman thinks that Kuhn's responses to the charge of bad relativism are inadequate. I'm not going to go through them though. Friedman's response begins by noting that Kuhn fails to distinguish between instrumental rationality and communicative rationality, a distinction suggested by Habermas. Instrumental rationality is the capacity for means-ends reasoning given a goal to bring about. Communicative rationality "refers to our capacity to engage in argumentative deliberation or reasoning with one another aimed at bringing about an agreement or consensus opinion." (p. 54) Friedman says that instrumental rationality is more subjective while communicative rationality is intersubjective. A steady scientific paradigm underwrites, to use Friedman's phrase, communicative rationality. Revolutionary science then seems to threaten communicative rationality. A similar sort of worry arises for Carnap and his multiple linguistic frameworks.
At this point it is unclear to me how Kuhn's failure to distinguish these two kinds of rationality hurts his responses.
Friedman thinks that the scientific enterprise aims at consensus between paradigms as well as within a paradigm. It does this in three ways. One is exhibiting the old paradigm as a special or limiting case of the new paradigm. For example, Newtonian gravity becomes a special case in relativistic gravitational theory. Friedman sketches a similar exhibition of Aristotelian mechanics as a special case of classical mechanics. This is highly anachronistic, but Friedman thinks the proper response emerges when we take a more historical view. He says that the concepts and principles of the new paradigm emerge from the old in natural ways. He thinks it aids nothing to view scientists from different paradigms as speakers of wildly different and incommensurable languages. He says, "In this sense, they are better viewed as different evolutionary stages of a single language rather than as entirely separate and disconnected languages." (p. 60) (Friedman does say this transition is "natural" in a few places, e.g. p. 63. He doesn't say what he means by natural, which seems to form the crux of his claim here. I'll come back to this.)
Friedman goes on to say that another way in which inter-paradigm consensus is aimed at is by successive paradigms aiming at greater generality and adequacy. One example: in the move from Aristotelian mechanics to classical mechanics, a Euclidean view of space is retained while a hierarchically and teleologically ordered spherical universe is discarded. He unfortunately doesn't address Kuhn's point that it is hard to say what is meant by generality and adequacy here. Successive paradigms take up many new questions, to be sure, but they also discard many old questions and solutions. They may, in fact, change what counts as adequate. This seems like an odd omission at this point in the dialectic.
Friedman reviews these points, that successive paradigms aim at incorporating old paradigms as special cases and that new concepts and principles should evolve out of old. He says that "this process of continuous conceptual transformation should be motivated and sustained by an appropriate new philosophical meta-framework, which, in particular, interacts productively with both older philosophical meta-frameworks and new developments taking place in the sciences themselves. This new philosophical meta-framework thereby helps to define what we mean ... by a natural, reasonable, or responsible conceptual transformation." (p. 66) This bit of discussion is preceded by a quick sketch of the regulative principles of current science approximating those of "a final, ideal community of inquiry," which apparently has some affinities to Cassirer's view. (I skipped it because it didn't shed light on things for me.) Friedman gives some handwavy descriptions about how philosophical meta-frameworks determine what is a natural transformation, but it is postponed to one of the essays in the second half of the book. Up until this point, philosophical meta-frameworks didn't enter into the discussion, so it seems a bit unmotivated to bring them in here. There were some things left unresolved, but claiming that a philosophical meta-framework resolves them is unsatisfying. The follow-up essay might resolve this.
One of the examples of a philosophical meta-framework in action that Friedman gives is the debate between Helmholtz and Poincare on the foundations of geometry against the backdrop of Kantian views. While both Helmholtz and Poincare were philosophical, it doesn't reflect too well, I would think, on their philosopher contemporaries that they didn't produce more (or are cited as producing more) of the philosophical framework. This particular example has a hint of claiming work by those who are more squarely in the camp of mathematical physics for the philosophical camp. I'd rather avoid going into discussions of disciplinary boundaries, but this seems like a weak justification for (or a weak example of a success of, I'm not sure which it is supposed to be) scientific philosophy. Maybe this indicates something that Friedman thinks, namely that philosophers should be more engaged with, perhaps primarily engaged with, doing hard work in the hard sciences.
At the end of the lecture, I am still wondering how the distinction between communicative and instrumental rationality was crucial. The distinction seemed to fade to the background pretty quickly and do little work. Friedman's points about inter-paradigm consensus were similar to ones that Kuhn discusses in Structure, so I'm a bit unclear on how Friedman's were adequate whereas Kuhn's were not. The follow up essays might clear things up, but I don't plan on reading them any time soon.
Friday, September 12, 2008
The second lecture of Dynamics of Reason was much more substantive than the first. There were two main parts, a positive one and a negative one.
The positive part was putting forward what I take to be the most important part of Friedman's lectures, his conception of relativized a priori. I'm not completely clear on it, but it seems like it is supposed to be a combination of Kantian and Carnapian perspectives. Kantian because it wants to answer questions about the very possibility of something, in particular how science is possible. Carnapian because it does this through Carnap's general setup of linguistic frameworks. The frameworks come with two different sorts of rules, L-rules and P-rules. L-rules are the mathematical and logical rules for forming and transforming sentences. P-rules are the empirical laws and scientific generalizations.
Following the first lecture, Friedman takes the L-rules to be constitutive of paradigms, in Kuhn's sense. They are the rules of the game, so to speak. This diverges from Kuhn in at least one respect. The L-rules are supposed to be listable whereas Kuhn doesn't think all aspects of a paradigm can be made explicit. Some can, but things like familiarity with equipment and some other sorts of know-how cannot be.
The P-rules are the substance of normal science. Questions about the P-rules are treated as internal questions, while questions about the L-rules are treated as external questions.
The relativization of the a priori comes in when Friedman notes that some P-rules, some scientific hypotheses, cannot even be formulated, much less be used to describe the world, until some bit of math is available. The math, in the corresponding L-rules, is presupposed by the P-rules. They are required for the possibility of formulating and applying the P-rules themselves, to put it in a Kantian way. One example of this is the calculus and Newton's laws of mechanics. Friedman notes that Newton's laws are, in their mathematical presentation, formulated in terms of the calculus. Additionally they only make sense against the background of a fixed inertial frame, which concept is given in an L-rule.
The claim seems to be that the L-rules of a framework provide the possibility of knowledge, a priori, within the area of that framework. The relativization is supposed to get around Kant's big problem, which was that he tethered his source of a priori knowledge to something that would provide Newton's laws and use Euclidean geometry. Since it is possible to switch between frameworks, and thus switch between L-rules, one can have sources of a priori knowledge, but relativized to a given framework. That said, I'm not sure I'm comfortable with this idea of relativized a priori knowledge. Friedman says that it provides a foundation for knowledge, but it seems awfully mutable for something foundational. Presumably we do not change frameworks that often, since we are supposed to see them as akin to paradigms in science.
Friedman's picture is very much like Carnap's. I'm not sure why he is saddling it with the Kantian stuff. He presents Carnap as being quite the neo-Kantian, although I don't think I get what that aspect adds to Carnap's view as presented. (I'm also somewhat troubled by the seemingly uncritical wholesale adoption of Kuhn's view of science. The presentation of Friedman's view of the a priori here seems to depend on Kuhn's picture of scientific development being right.)
The presentation of the relativized a priori was the positive part. The negative part consists of an argument against Quine's epistemological holism. The argument's main thrust is that holism can't account for the sort of dependence on, or presupposition of, different parts of math by different empirical laws. Friedman notes that Quine says that empirical content and evidence accrue to mathematical and logical statements as well as occurrence statements and empirical laws. Thus, the elements of the web of belief are all on an evidential (epistemic?) par, so we can treat the web as a big conjunction of statements. There are no asymmetric dependencies in a conjunction, so Quine's view can't account for the impossibility of, say, formulating Newtonian mechanics while rejecting (or merely lacking?) the presupposed math. In Friedman's words, "Newton's mechanics and gravitational physics are not happily viewed as symmetrically functioning elements of a larger conjunction: the former is rather a necessary part of the language or conceptual framework within which alone the latter makes empirical sense." Friedman actually gives this argument twice, once applied to Newtonian mechanics and once applied to relativity theory. The arguments are virtually the same and are on consecutive pages. (Maybe it worked better in lecture form.) Quine's view cannot account for the history of science, the parallel development of mathematical framework with empirical claims.
It seems like the Quinean could agree with Friedman. The view sketched flounders, but it is not Quine's view either. It seems to me something of a strawman. Despite discussing some of Quine's views on revising theory, Friedman concludes that Quine treats the elements of the web of belief, to continue using the metaphor, as parts of one big conjunction. This doesn't seem like Quine's view at all. It is important to Quine that certain things, such as math and logic, are less open to revision and rejection because they provide such utility and unificatory power. They are used in deriving consequences of theory and observation. It seems consistent with the Quinean view, I think because it was the Quinean view, that there be an asymmetry between the math that is needed in the framing of an empirical law and the empirical law. From what is presented, Friedman's objection doesn't cut against Quine.
In a way, this is too bad. The setup is perfect for really testing how Quine's epistemological holism could deal with some hard cases, i.e. detailed case studies from the history of science. Unfortunately, these are used to dismiss the weird conjunctive picture of Quine's holism, which is obviously bad. It would be informative to tell the Quinean story, in more detail, for what is going on with the dependence of Newton's mechanics and gravitation on the calculus. I'm not going to do that now though.
In all, the positive part of the lecture was more convincing, although it was still a bit lacking. There is a follow up essay entitled "The Relativized A Priori" that would probably clear things up. I'm not sure if I'm going to read it though. It depends how lecture three goes.
Thursday, September 11, 2008
I started reading Michael Friedman's Dynamics of Reason. The book is broken into two parts, the original lectures that form the basis for the book and things that came out of discussion of the lectures. I suppose I will reserve judgment on whether to read the latter bits till I've gotten through the main lectures.
The first lecture contained some brief history about how philosophers and scientists came to be two largely distinct groups. The philosophers of Descartes' time would, apparently, have thought it strange to be made to choose between camps. Following Newton, the two groups started to diverge, although many were still interested in developments on both sides. Friedman goes on to tell a story on which the conceptual problems coming out of the sciences form the basis of philosophical debate. The first paradigm for this is Kant and his attempt to explain the possibility of Newtonian physics. Kant does this, roughly, by incorporating the basic concepts of Newtonian physics into the form of our intuition, the conceptual scheme with which we attempt to make sense of the world of appearance. This idea is continued later with Schlick trying to set up a similar foundation for relativity theory, except, rather than using an unchangeable, rigid idea like the Kantian form of intuition, he uses Poincare's conventionalism. This is prima facie less rigid.
The story continues forward to Kuhn's characterization of the development of science in terms of scientific revolutions. Friedman makes an interesting observation here. He claims that one of the reasons that Carnap was so enthusiastic about Kuhn's work, which was published in the Encyclopedia Carnap edited, was that he thought Kuhn's normal/revolutionary science distinction lined up with internal/external questions on his framework. Normal science is about developing science within a paradigm, a (more or less) fixed set of rules and ideas by which everyone operates. The framework itself is not in question. These are internal questions. Revolutionary science puts the framework itself in doubt and the search for a new framework begins. The motivation, usefulness and adequacy of different frameworks and theories are questioned. These are external questions, questions about which framework to adopt. This might be old hat, but it had never been clear to me why Carnap had said that he thought Kuhn's work lined up with his own. I was fairly clear on why, say, Hempel would like it. Putting things this way made the affinity with Carnap's views clearer, which was helpful.
Friedman's main complaint about Kuhn seems to be that Kuhn treats philosophy ahistorically, roughly the way that Kuhn accuses philosophers of having treated science. Friedman's project seems like it is to show how, when viewed more historically, developments in philosophy closely parallel developments in science and contribute to it during the revolutionary periods. Philosophy is supposed to provide new conceptual possibilities on which science can draw when the need arises. This provides the sketch of an answer to the problem with which the lecture opens, the relation of the sciences to philosophy.
The setup seems promising. The first lecture was a bit high altitude. There are lots of details that Friedman needs to supply here, like more examples of philosophers providing such conceptual possibilities, as opposed to mathematicians or, say, physicists. The historical narrative is engaging, and I like the general motivation for studying the history of philosophy. It seems like it leaves a lot of philosophy in the lurch though. Friedman's ideas don't seem particularly applicable to ethics and aesthetics. I'm not sure what the story is supposed to be for the relation of philosophy to math since the latter is difficult to fit into the Kuhnian mould. Lastly, I'm not sure why we want philosophy to have that role. I suppose it would justify some bits of philosophy to some, but I don't yet have a clear idea of what course of development it would recommend for philosophy as a whole. It seems like it should say something normative.
I'm reading through some of Wittgenstein's lectures on the foundations of mathematics. I'm not really sure what to expect. I thought I'd write up some notes on it as I went along. This is, in my opinion, the only way to read Wittgenstein. I figured I'd post them in case anyone can help shed some light on what is happening or is interested.
Wittgenstein's approach to understanding in the first few lectures (I assume throughout) seems to be that to understand a concept is to be able to use it. Suppose that someone says that they understand the same concept I do and we use it in the same way in several cases but then diverge. Wittgenstein seems to think that this means that we understand it differently, because our uses of the concept diverge. At the start of lecture two, Wittgenstein distinguishes two criteria for understanding. One is where you respond to a question of understanding with "Of course." If asked whether one understands "house", the response will be an affirmative. The other criterion is how the word is used, indicating houses, etc. Wittgenstein seems to cast some doubt on the first criterion of understanding. He seems to be unsure what justifies it except the second criterion.
This is probably related. He thinks that part of understanding what a mathematical discovery is consists in seeing the proof of it. He gives a sample exchange: "Suppose Professor Hardy came to me and said "Wittgenstein, I've made a great discovery. I've found that ..." I would say, "I am not a mathematician, and therefore I won't be surprised at what you say. For I cannot know what you mean until I know how you've found it." We have no right to be surprised at what he tells us. For although he speaks English, yet the meaning of what he says depends upon the calculations he has made." The proof of a mathematical claim isn't an application of the claim. It does demonstrate a use of the concepts involved, which use, I suppose, gives the meaning of the proposition. I should note that it is a little weird for the exchange to talk about a discovery, at least from Wittgenstein's perspective. Near the end of the first lecture he says that he will try to convince us that mathematical discoveries are better described as mathematical inventions.
Why would we need to see the proof of a statement to understand it? The proof provides an illustration of the use of the concepts involved. This begins to shed some light on why different proofs of one proposition are interesting. The proposition has been shown to be true once demonstrated, assuming the proof is good, so additional confirmations of this aren't that exciting. What is useful is seeing different ways in which the concepts can be used together. This seems to result in understanding the proposition in different ways. Does it result in different meanings for the proposition? Not sure. It isn't clear what role, if any, the idea of meaning plays in these lectures.
In lecture four, Wittgenstein talks about people who use things that look like mathematical propositions without them being integrated into a wider mathematical context. The example is kind of weird. It is a group of people who measure things with rulers and then calculate to figure out how much something will weigh. The question is whether they are working with physical or mathematical propositions. Wittgenstein thinks "both" might be a reasonable answer, but he gives a follow-up suggestion that might indicate that this isn't his view. He says that there is a view that math consists of propositions and there is another view that it consists of calculations. It seems like the latter will be his view.
The first few lectures have been a bit difficult to pull together, although going forward a bit, I think I'm getting a better sense of some of the issues. More notes to follow I expect.
Wednesday, September 10, 2008
I'm reading a nice review of Hylton's Quine book on NDPR, and one line struck me. The review says, "Hylton's pivotal interpretative thesis is that Quine -- contrary to widespread opinions -- is basically a systematic philosopher." I found this somewhat surprising since it seems quite hard, to me, to view Quine as a non-systematic philosopher. This is elaborated in the review: "That means, according to Hylton, that his main purpose is constructive rather than negative." I don't think this ameliorates matters. I'm still surprised. Quine has his destructive/negative side, sure, but it is supplemented with an integrated, systematic (for lack of a better adjective) view of the world. One could see Quine as entirely negative if one stopped reading him at "Two Dogmas" but a glance through Word and Object should give hints of the systematic side. I am, consequently, surprised by the opening of the review. Who thought Quine was entirely negative and why?
There is a similarly surprising line, albeit to a lesser degree, in a review written by Fodor of a collection of Davidson's essays. I think it was in the London Review of Books. Fodor says something along the lines of: it turns out that Davidson's thought is fairly unified after all. Maybe it was harder to piece together going forward, when Davidson's work was scattered about a bunch of journals.
Posted by Shawn at 8:46 PM
Saturday, September 06, 2008
In the proof theory class I'm taking, Belnap introduced several different axiomatic systems, their natural deduction counterparts, and deduction theorems linking them. We started with the Heyting axiomatization for intuitionistic logic and the Fitch formulation of natural deduction for it.
The neat thing was the explanation of how to turn the intuitionistic natural deduction system into relevance logic. To do this, we add a set of indices to attach to formulas. When formulas are assumed, they receive exactly one index (a set containing one index), which is not attached to any other formulas. The rule for →-In still discharges assumptions, but it is changed so that the set of indices attached to A→B is equal to the set attached to B minus the set attached to A, and A's index must be among B's indices. This enforces non-vacuous discharge. It also restricts what things can be discharged. The way it was glossed was that A must be used in the derivation of B.
From what I've said there isn't any way for a set of indices to get bigger. The rule for →-Elim does just that. When B is obtained from A and A→B, B's indices will be equal to the union of A's and A→B's. This builds up indices on formulas in a derivation, creating a record of what was used to get what. Only the indices of formulas used in an instance of →-Elim make it into the set of indices for the conclusion, so superfluous assumptions can't sneak in and appear to be relevant to a conclusion.
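The bookkeeping just described can be sketched in a few lines of Python. This is my own toy rendering, not Belnap's official formulation: assumptions get fresh singleton index sets, →-Elim unions indices, and →-In refuses vacuous discharge.

```python
import itertools

# Counter for generating fresh indices, one per assumption.
_fresh = itertools.count()

def assume():
    """Assuming a formula attaches a fresh singleton index set."""
    return frozenset([next(_fresh)])

def arrow_elim(idx_a, idx_a_to_b):
    """->-Elim: B's indices are the union of A's and A->B's."""
    return idx_a | idx_a_to_b

def arrow_in(idx_assumption, idx_b):
    """->-In: A's index must be among B's (A was used to get B);
    A->B then carries B's indices minus A's."""
    if not idx_assumption <= idx_b:
        raise ValueError("vacuous discharge: assumption not used")
    return idx_b - idx_assumption
```

Running a tiny derivation through this — assume A, assume A→B, apply →-Elim, then discharge A — hands A→B's original index set back, while trying to discharge an unused assumption raises an error, which is the point of the restriction.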
This doesn't on the face of it seem like a large change. Just the addition of some indices with a minor change to the assumption rule and the intro and elim rules. The rule for reiterating isn't changed; indices don't change for it. Reiterating a formula into a subproof puts it under the assumption of the subproof, in the sense of appearing below it in the Fitch proof, but not in the sense of dependence. The indices and the changes in the rules induce new structural restrictions, as others have noted. We haven't gotten to sequent calculi or display logic, so I'm not going to go into what the characterization of relevance logic would look like in those. Given my recent excursion into Curry-Howard land, I do want to mention what must be done to get relevance logic in the λ-calculus. A restriction has to be placed on the abstraction rule, i.e. no vacuous abstractions are allowed. This is roughly what one would expect. Given the connection between conditionals being functions from proofs of their antecedents to proofs of their consequents and λ-abstraction forming functions, putting a restriction on the former should translate to a restriction on the latter, which it does.
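To make the λ-calculus side concrete, here is a small sketch of the restriction, using my own toy representation of terms as tuples (nothing from the course): abstraction is only permitted when the bound variable actually occurs free in the body.

```python
def free_vars(term):
    """Free variables of a term; terms are tuples:
    ('var', x), ('app', m, n), or ('lam', x, m)."""
    tag = term[0]
    if tag == 'var':
        return {term[1]}
    if tag == 'app':
        return free_vars(term[1]) | free_vars(term[2])
    if tag == 'lam':
        return free_vars(term[2]) - {term[1]}
    raise ValueError("unknown term: %r" % (term,))

def relevant_lam(x, body):
    """Build lambda x. body, but only if x occurs free in body --
    the no-vacuous-abstraction restriction, mirroring
    non-vacuous discharge in the natural deduction system."""
    if x not in free_vars(body):
        raise ValueError("vacuous abstraction: %s unused" % x)
    return ('lam', x, body)
```

So λx.x is fine, while λx.y is rejected, just as discharging an unused assumption is.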
I've been reading Lectures on the Curry-Howard Isomorphism by Sørensen and Urzyczyn recently. I wanted to comment on one of the interesting things in it.
Briefly, the Curry-Howard isomorphism is an isomorphism between proofs and λ-terms. In particular, it is well developed and explored for intuitionistic logic and typed λ-calculi. It makes good sense of the BHK interpretation of intuitionistic logic. The intuitionistic conditional A→B is understood as a way of transforming a proof of A into a proof of B. Correspondingly, λ-abstraction over a variable of type A in a term of type B yields a function that, when given a term of type A, results in a term of type B. There is a nice connection that can be made explicit between intuitionistic logic and computability.
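To give a flavor of the BHK reading, here is my own illustration (not the book's notation), using plain Python functions as stand-ins for proofs:

```python
def compose(proof_a_to_b, proof_b_to_c):
    """From 'proofs' of A->B and B->C (functions), build a proof of
    A->C -- the transitivity of the conditional as composition."""
    return lambda proof_a: proof_b_to_c(proof_a_to_b(proof_a))

def conj_intro(proof_a, proof_b):
    """A proof of A&B is just a pair of a proof of A and a proof of B."""
    return (proof_a, proof_b)
```

On this reading a logical inference step literally is an operation on proof-objects, which is why the correspondence with computation is so tight.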
I'm not sure if I've read anyone using this to argue for intuitionistic logic over classical logic explicitly. Something like this is motivating Martin-Löf, I think. Classical logic doesn't have this nice correspondence. At least, this is what I had thought. One of the surprising things in this book is that it presents a developed correspondence between classical proofs and λ-terms that makes sense of the double negation elimination rule, which is rejected by intuitionistic logic.
Double negation elimination corresponds to what the book terms a control structure. I'm not entirely clear on what this is supposed to mean. Apparently it comes from programming language theory. It involves the addition to the λ-calculus of a bunch of "addresses" and an operator μ for binding them. It is a little opaque to me what these addresses are supposed to be. When thinking about them in terms of computers, which is their conceptual origin I expect, it makes some sense to think of them as places in memory or some such. I'm not sure how one should think of them generally. (I'm not sure if this is the right way to think of them in this context either.) Anyway, these addresses, like the terms themselves, come with types and rules of application and abstraction. There are also rules given for the way the types of the addresses and the types of the terms interact that involve negations. To make this a little clearer, the rule for address abstraction is:
from Γ, a : ¬σ |- M : ⊥, infer
Γ |- (μa : ¬σ. M) : σ.
The rule for address application is:
from Γ, a : ¬σ |- M : σ, infer
Γ |- ([a]M) : ⊥.
In the above, Γ is a set of typing assumptions pairing variables with their types, M is a term, and the expressions after the colons are types. (Anyone know of a way to get the single turnstile in html?)
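For what it's worth, control structures in this sense belong to the same family as control operators in programming languages, such as Scheme's call/cc, which can famously be assigned Peirce's law, ((A → B) → A) → A, as a type — a classically but not intuitionistically valid formula. Python has no call/cc, but a rough one-shot, escape-only approximation can be simulated with exceptions (my own sketch, not from the book):

```python
def call_cc(f):
    """Call f with a one-shot escape continuation: invoking the
    continuation with a value aborts f and returns that value."""
    class _Escape(Exception):
        pass

    def k(value):
        _Escape.value = value
        raise _Escape()

    try:
        return f(k)
    except _Escape:
        return _Escape.value

def product(xs):
    """Multiply a list of numbers, jumping out early on a zero."""
    def body(abort):
        r = 1
        for x in xs:
            if x == 0:
                abort(0)  # non-local exit: the classical 'jump'
            r *= x
        return r
    return call_cc(body)
```

The non-local jump is the computational content that plain λ-terms lack, which is why the classical system needs the extra operator μ.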
The upshot of this, I take it, is that we can make (constructive?) computational sense of classical logic, like intuitionistic, relevance, and linear logics. Not exactly like them, since the classical case requires the addition of a second sort of thing, the addresses, in addition to the λ-terms, and another operator besides. Assessing the philosophical worth of this depends on getting clear on what the addition of this second sort of thing amounts to. I can't reach any conclusions on the basis of what is given in the book. If the addresses are innocuous, then it seems like one could use this to construct an argument against some of Dummett's and Prawitz's views about meaning. This would proceed along the lines that, despite Dummett's and Prawitz's arguments to the contrary, we can make sense of the double negation elimination rule in terms of this particular correspondence. I don't have any more meat to put on that skeleton because I don't have a good sense of the details of their arguments, just rough characterizations of their conclusions.
There is also a brief discussion of Kolmogorov's double negation embedding of classical logic into intuitionistic logic. This is cashed out in computational terms. It's proved that if a term is derivable in the λμ-calculus then its translation is derivable in the λ-calculus. One would expect this result, since the embedding shows that for anything provable in classical logic, its translation is provable in intuitionistic logic. It's filling in the arrows in the diagram.
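On the formula side, Kolmogorov's embedding just prefixes a double negation to every subformula. A quick sketch (my own encoding of formulas as tuples, not the book's):

```python
def neg(f):
    """Negation of a formula."""
    return ("not", f)

def kolmogorov(f):
    """Kolmogorov's translation: prefix ¬¬ to every subformula.
    Formulas are ("atom", name), ("not", g), or (op, left, right)
    for op in {"and", "or", "imp"}."""
    nn = lambda g: neg(neg(g))
    tag = f[0]
    if tag == "atom":
        return nn(f)
    if tag == "not":
        return nn(neg(kolmogorov(f[1])))
    # binary connective
    return nn((tag, kolmogorov(f[1]), kolmogorov(f[2])))
```

So an atom p goes to ¬¬p, and p → q goes to ¬¬(¬¬p → ¬¬q).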
One thing that seemed to be missing in this part of the book was a discussion of combinators. One can characterize intuitionistic logic in terms of combinators. A similar characterization can be done for relevance logic and linear logic. No such characterization was given for classical logic. Why would this be? The combinators correspond to certain λ-terms, and classical logic requires moving beyond the λ-calculus to the λμ-calculus. I expect that either the combinators can't express what is going on with μ, or such an expression hasn't been found yet. (Would another sort of combinator be needed to do this?) [Edit: As Noam notes in the comments, my conjecture was wrong. No new combinators are needed; apparently the old ones suffice.]
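For reference, the combinatory characterization of the implicational fragment of intuitionistic logic needs just S and K, whose types are the first two axioms of a standard Hilbert system. A quick untyped sketch in Python (the comments give the intended types):

```python
# K has type A → (B → A).
K = lambda x: lambda y: x

# S has type (A → (B → C)) → ((A → B) → (A → C)).
S = lambda f: lambda g: lambda x: f(x)(g(x))

# The identity combinator I, of type A → A, is definable as S K K.
I = S(K)(K)
```

That S K K behaves as the identity mirrors the fact that A → A is derivable from the two axiom schemes, which is the combinatory analogue of λ-abstraction.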
Friday, September 05, 2008
Russell, in "Introduction to Mathematical Philosophy," says:
"It has been thought ever since the time of Leibniz that the differential and integral calculus required infinitesimal quantities. Mathematicians (especially Weierstrass) proved that this is an error; but errors incorporated, e.g. in what Hegel has to say about mathematics, die hard, and philosophers have tended to ignore the work of such men as Weierstrass."
The philosophy of math class I'm taking now will probably show me what the work of a mathematician like Weierstrass has to offer philosophers. (I don't really know.) However, this quote made me wonder what errors Russell had in mind. What errors were incorporated into what Hegel said about mathematics? Did Hegel talk about infinitesimals? Russell doesn't specify what the errors are, which is too bad. I hope he doesn't mean the alleged quip about the necessity of the number of planets being seven. Whether or not Hegel actually said that, it is a stretch to call that an error in mathematics.
Posted by Shawn at 9:23 AM
Wednesday, September 03, 2008
I thought I'd post a couple of questions, which I'm sort of looking into, while more substantive posts are still in development.
First question: Are there discussions of linear logic anywhere in the philosophical literature? There's a lot on relevance logic and a lot on intuitionistic logic. I'm not sure where to find stuff on linear logic. Failing that, are there any computer science articles that include philosophical discussions of it that go beyond "premises are like resources"? (That's just something I've seen a lot. It is a helpful though opaque metaphor.) I don't know that I have it in me to work through Girard's stuff yet.
Second question: Are there any philosophical books or articles that talk about formal language theory? (I mean more than just Turing machines.) Brandom's Locke Lectures have a short discussion of it early on, mainly the Chomsky hierarchy, but that falls by the wayside and is heavily laden with Brandom's own project. I bet there's something neat of a philosophical bent somewhere in the computer science literature, but I have no clue where.
Posted by Shawn at 8:27 AM