Saturday, September 06, 2008

Classical Curry-Howard

I've been reading Lectures on the Curry-Howard Isomorphism by Sørensen and Urzyczyn recently. I wanted to comment on one of the interesting things in it.

Briefly, the Curry-Howard isomorphism is an isomorphism between proofs and λ-terms. In particular, it is well developed and explored for intuitionistic logic and typed λ-calculi. It makes good sense of the BHK interpretation of intuitionistic logic. The intuitionistic conditional A → B is understood as a way of transforming a proof of A into a proof of B. Correspondingly, λ-abstraction over a variable of type A in a term of type B yields a function that, when given a term of type A, results in a term of type B. There is a nice connection that can be made explicit between intuitionistic logic and computability.
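To make this concrete, here is a minimal Haskell sketch of the proofs-as-programs reading, with function types standing in for conditionals. The names are just illustrative, not anything from the book.

-- Under the correspondence, a type is a proposition and a (total) term
-- inhabiting it is a proof of that proposition.

-- A → A: the identity function is a proof of the simplest conditional.
reflImp :: a -> a
reflImp x = x

-- Modus ponens is just function application: from A → B and A, infer B.
modusPonens :: (a -> b) -> a -> b
modusPonens f x = f x

-- (A → B) → ((B → C) → (A → C)): composing the proofs composes the functions.
hypSyllogism :: (a -> b) -> (b -> c) -> (a -> c)
hypSyllogism f g = g . f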

I'm not sure that I've read anyone explicitly using this to argue for intuitionistic logic over classical logic. Something like it seems to be motivating Martin-Löf, I think. Classical logic doesn't have this nice correspondence; at least, that is what I had thought. One of the surprising things in this book is that it presents a developed correspondence between classical proofs and λ-terms that makes sense of the double negation elimination rule, which intuitionistic logic rejects.

Double negation elimination corresponds to what is known in programming language theory as a control operator. I'm not entirely clear on what this is supposed to mean. It involves adding to the λ-calculus a collection of "addresses" and an operator μ for binding them. It is a little opaque to me what these addresses are supposed to be. When thinking about them in terms of computers, which I expect is their conceptual origin, it makes some sense to think of them as places in memory or some such. I'm not sure how one should think of them generally. (I'm not sure if this is the right way to think of them in this context either.) Anyway, these addresses, like the terms themselves, come with types and with rules of application and abstraction. There are also rules governing how the types of the addresses and the types of the terms interact, and these involve negations. To make this a little clearer, the rule for address abstraction is:
Γ, a: ¬σ |- M: ⊥, infer
Γ |- (μa: ¬σ. M): σ.
The rule for address application is:
Γ, a: ¬σ |- M: σ, infer
Γ |- ([a]M): ⊥.
In the above, Γ is a context, i.e. a set of variables and addresses paired with their types, M is a term, and the things after the colons are types. (Anyone know of a way to get the single turnstile in html?)
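To see how the two rules fit together with ordinary abstraction and application, here is a toy type-checking sketch in Haskell. It is my own encoding, not the book's: ¬σ is read as σ → ⊥, and, as a simplification, term variables and addresses share one context.

{-# LANGUAGE LambdaCase #-}

data Ty = TVar String | Ty :-> Ty | Bot
  deriving (Eq, Show)

infixr 5 :->

data Tm
  = Var String          -- ordinary variable x
  | Lam String Ty Tm    -- λx:σ. M
  | App Tm Tm           -- M N
  | Mu String Ty Tm     -- μa:¬σ. M   (the Ty field records σ)
  | Name String Tm      -- [a]M
  deriving Show

-- ¬σ is read as σ → ⊥ throughout.
neg :: Ty -> Ty
neg s = s :-> Bot

-- One context holds both term variables and addresses; a real λμ
-- presentation keeps them apart, so this is a simplification.
type Ctx = [(String, Ty)]

typeOf :: Ctx -> Tm -> Maybe Ty
typeOf g = \case
  Var x     -> lookup x g
  Lam x s m -> (s :->) <$> typeOf ((x, s) : g) m
  App m n   -> do
    s :-> t <- typeOf g m
    s'      <- typeOf g n
    if s == s' then Just t else Nothing
  -- Address abstraction:  from  Γ, a:¬σ |- M: ⊥  infer  Γ |- (μa:¬σ. M): σ
  Mu a s m  -> do
    Bot <- typeOf ((a, neg s) : g) m
    Just s
  -- Address application:  from  a:¬σ in Γ  and  Γ |- M: σ  infer  Γ |- [a]M: ⊥
  Name a m  -> do
    s :-> Bot <- lookup a g
    s'        <- typeOf g m
    if s == s' then Just Bot else Nothing

-- Double negation elimination as a λμ-term:  λz:¬¬p. μa:¬p. z (λx:p. [a]x)
dneTerm :: Tm
dneTerm = Lam "z" (neg (neg p))
            (Mu "a" p (App (Var "z") (Lam "x" p (Name "a" (Var "x")))))
  where p = TVar "p"

Running typeOf [] dneTerm yields the type ¬¬p → p, so dneTerm is a λμ-style witness of double negation elimination at the base type p.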

The upshot of this, I take it, is that we can make (constructive?) computational sense of classical logic, as we can of intuitionistic, relevance, and linear logics. Not exactly like them, since the classical case requires the addition of a second sort of thing, the addresses, alongside the λ-terms, and another binding operator besides. Assessing the philosophical worth of this depends on getting clear on what the addition of this second sort of thing amounts to; I can't reach any conclusions on the basis of what is given in the book. If the addresses are innocuous, then it seems like one could use this to construct an argument against some of Dummett's and Prawitz's views about meaning. This would proceed along the lines that, despite Dummett's and Prawitz's arguments to the contrary, we can make sense of the double negation elimination rule in terms of this particular correspondence. I don't have any more meat to put on that skeleton because I don't have a good sense of the details of their arguments, just rough characterizations of their conclusions.

There is also a brief discussion of Kolmogorov's double negation embedding of classical logic into intuitionistic logic. This is cashed out in computational terms: it's proved that if a typing judgment is derivable in the λμ-calculus then its translation is derivable in the λ-calculus. One would expect this result, since the embedding shows that for anything provable in classical logic, its translation is provable in intuitionistic logic. It's filling in the arrows in the diagram.
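For the formula-level half of this, here is a small Haskell sketch of one common presentation of Kolmogorov's translation, which prefixes a double negation to every subformula. The datatype and names are mine, not the book's notation.

data Formula
  = Atom String
  | Formula :=> Formula
  | Formula :/\ Formula
  | Formula :\/ Formula
  | Falsum
  deriving Show

negF :: Formula -> Formula
negF a = a :=> Falsum

-- Prefix every subformula with a double negation.
kolmogorov :: Formula -> Formula
kolmogorov f = case f of
    Atom s  -> nn (Atom s)
    a :=> b -> nn (kolmogorov a :=> kolmogorov b)
    a :/\ b -> nn (kolmogorov a :/\ kolmogorov b)
    a :\/ b -> nn (kolmogorov a :\/ kolmogorov b)
    Falsum  -> nn Falsum
  where
    nn = negF . negF

-- Example: the classically valid ¬¬p → p maps to a formula that is
-- provable intuitionistically.
example :: Formula
example = kolmogorov (negF (negF (Atom "p")) :=> Atom "p")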

One thing that seemed to be missing in this part of the book was a discussion of combinators. One can characterize intuitionistic logic in terms of combinators. A similar characterization can be done for relevance logic and linear logic. There wasn't any such characterization given for classical logic. Why would this be? The combinators correspond to certain λ-terms. Classical logic requires moving beyond the λ-calculus to the λμ calculus. The combinators either can't express what is going on with μ or such an expression hasn't been found yet, I expect. (Would another sort of combinator be needed to do this?) [Edit: As Noam notes in the comments, I was wrong on my conjecture. No new combinators are needed and apparently the old ones suffice.]
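For comparison, here is what the combinator picture looks like in Haskell: S and K are ordinary typed terms, and, following Noam's point below, the classical extension just postulates one further combinator with a classical type such as Peirce's law. The names are mine, for illustration.

-- K : A → (B → A)
combK :: a -> b -> a
combK x _ = x

-- S : (A → (B → C)) → ((A → B) → (A → C))
combS :: (a -> b -> c) -> (a -> b) -> a -> c
combS f g x = f x (g x)

-- A hypothetical extra primitive with the type of Peirce's law,
-- ((A → B) → A) → A. It is not definable from S and K alone, so it
-- would be added as a constant rather than implemented.
-- combPeirce :: ((a -> b) -> a) -> a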

3 comments:

Noam said...

To understand computational interpretations of classical logic, you need to understand the notion of a "continuation". The basic idea is that a continuation corresponds to a proof of ¬A, i.e., a refutation of A, i.e., a function that accepts an A and then does something with it. We don't expect to get a value back from the continuation, which is why the return type is ⊥. Now, suppose you have some expression M : A, i.e., M is a proof of A. M occurs inside some larger program, and (assuming the program is well-typed) necessarily in a context that expects an A. This context is sometimes called "the rest of the computation", since it's supposed to be what the program will execute after evaluating M.
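A minimal Haskell illustration of that reading (the names here are illustrative, not from the comment): a continuation consumes an A, its answer type plays the role of ⊥, and "the rest of the computation" is made explicit by writing a function in continuation-passing style.

-- A continuation for a is a refutation of a: it consumes an a and
-- produces an answer of type r, which plays the role of ⊥.
type Refutes r a = a -> r

-- Continuation-passing style makes "the rest of the computation"
-- explicit: instead of returning an Int, we hand it to the continuation.
addOneCPS :: Int -> Refutes r Int -> r
addOneCPS n k = k (n + 1)

-- The ordinary direct-style function is recovered by passing the
-- identity continuation.
addOne :: Int -> Int
addOne n = addOneCPS n id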

What a control operator lets you do is reify the rest of the computation as a continuation of type ¬A, and access it within M. For example, the type of double-negation-elimination, ¬¬A → A, says that if you can build a continuation for a continuation for A, then you can use that as an A. The term M now has control over its own evaluation---for example, it can apply the continuation multiple times, or not at all. Note that a more common control operator, callcc, actually corresponds to Peirce's law, but the idea is basically the same.
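As a concrete sketch of the callcc point, using Haskell's standard Control.Monad.Cont library (the example and its names are mine): the type of callCC is Peirce's law read inside Cont r, and the captured continuation may be invoked once, many times, or not at all.

import Control.Monad (when)
import Control.Monad.Cont (Cont, callCC, runCont)

-- Peirce's law ((A → B) → A) → A, read with Cont r as the ambient context.
peirce :: ((a -> Cont r b) -> Cont r a) -> Cont r a
peirce = callCC

-- Using the captured continuation zero or one times: escape early when
-- the divisor is zero, otherwise never invoke it.
safeDiv :: Int -> Int -> Cont r (Maybe Int)
safeDiv x y = callCC $ \exit -> do
  when (y == 0) (exit Nothing)
  return (Just (x `div` y))

-- runCont (safeDiv 7 0) id  evaluates to Nothing
-- runCont (safeDiv 7 2) id  evaluates to Just 3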

If you want to get a better sense of this correspondence, you can take a look at Griffin's paper. One minor point: there is nothing difficult about doing this with combinators: just add a combinator corresponding to DNE or Peirce's law. This is done, for example, in the combinator-based "programming language" Unlambda.

While this basic intuition gives a computational interpretation of classical logic, the question is, how useful an interpretation? Because of the validity of excluded-middle, it can't have properties such as, "a proof of A∨B is a proof of A or a proof of B", which are essential to Dummett/Prawitz/Martin-Löf meaning explanations. In fact, it turns out that without imposing any other restrictions the proof theory is trivial: all proofs of a proposition are equated (because they can yield the same result under evaluation).

There have been various attempts to fix this problem and give a more satisfactory computational interpretation of classical logic. My paper has one which I think matches well with Dummett's account of meaning. On the other hand, it could be seen as just instantiating the double-negation interpretation of classical logic.

Jonny Cache said...

You may be interested in a perspective on Peirce's law that uses a version of Peirce's own logical graphs.

For example:

Peirce's Law @ ProofWiki

Jon Awbrey

Jon Awbrey said...

Re: "Anyone know of a way to get the single turnstile in html?"

Here's a large list of unicode symbols, not all of which work on all browsers.

The single turnstile is listed as "assertion" (⊦ = &#8870;).

Jon Awbrey