In the Revision Theory of Truth, Gupta says (p. 125) that a circular definition does not give an extension for a concept. Instead, it gives a rule of revision: given a candidate hypothesis for the extension, the rule yields a better candidate. Repeated application of the rule is supposed to yield better and better hypotheses for the extension of the concept. This sounds like the revision rule supplies some sort of monotonicity, but what that amounts to is unclear.

For a contrast case, consider Kripke's fixed point theory of truth. It builds a fixed point interpretation of truth by adding more sentences to the extension and anti-extension of the truth predicate at each stage. This process is monotonic in an obvious way: the extension and anti-extension only get bigger. The hypotheses generated by revision rules, by contrast, do not solely expand; they can shrink, and they do not solely shrink either. Revision rules are non-monotonic functions. They are eventually well-behaved, but that does not amount to monotonicity. One idea would be that the set of elements that are stable under revision monotonically increases. This isn't the case either. Elements can be stable in initial segments of the revision sequence and then become unstable once a limit stage has been passed. This doesn't happen in all revision sequences, but the claim in RTT seemed to be about all revision sequences. Eventually hypotheses settle down and some become periodic, but it is hard to say that periodicity indicates that the revisions result in better hypotheses.
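The contrast can be made vivid with a toy example. The sketch below is mine, not Gupta's or Kripke's formalism: a monotone "jump" operator over a four-element domain, iterated up to a fixed point, next to the revision rule for the circular definition "x is G iff x is not G," whose hypotheses oscillate rather than accumulate.

```python
# Toy contrast (my own illustration): a monotone Kripke-style jump
# versus a non-monotone revision rule, over a small finite domain.

DOMAIN = frozenset(range(4))

def kripke_jump(s):
    # Monotone: the output always includes the input, so the
    # "extension" only ever grows from stage to stage.
    return s | {x for x in DOMAIN if x - 1 in s}

def revision(h):
    # Revision rule for "x is G iff x is not G": flip the hypothesis.
    # Successive hypotheses both gain and lose elements.
    return DOMAIN - h

s = frozenset({0})
while kripke_jump(s) != s:
    assert s <= kripke_jump(s)   # monotonicity at every stage
    s = kripke_jump(s)
# s is now a fixed point of the jump: kripke_jump(s) == s

h = frozenset({0, 1})
h1 = revision(h)    # {2, 3}: neither a superset nor a subset of h
h2 = revision(h1)   # {0, 1}: back where we started, period two
```

The assertion inside the loop is the point of the example: the Kripke-style operator never discards anything it has claimed, so iteration climbs to a fixed point, while the revision rule has no fixed point here at all; it just cycles with period two.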

The claim that a rule of revision gives better candidate extensions for a concept is used primarily for motivating the idea of circular definitions. It doesn't seem to figure in the subsequent development. The theory of circular definitions is nice enough that it can stand without that motivation. Nothing important hinges on the claim that revision rules yield better definitions, so abandoning it doesn't seem like a problem. I'd like to make sense of it though.

## Thursday, October 30, 2008

### A note on the revision theory

Posted by Shawn at 10:04 PM 3 comments

## Friday, October 24, 2008

### Question about the theories of models and proofs

I seem to have gone an unfortunately long time without a post. I'm not quite sure how that happened. I'll try to get back on the wagon.

I have a question that maybe some of my readers might be able to answer. What are some good sources of either criticisms of proof theoretic semantics/proof theory from the model theoretic standpoint or criticisms of model theoretic semantics/model theory from the proof theoretic standpoint? I'm sure there is a lot out there for the former that references the incompleteness theorems. I'm not sure where to look for the latter. From a post at Nothing of Consequence I've found a paper by Anna Szabolcsi that spells out a few things. Beyond that, I'm not terribly sure where to look.

Posted by Shawn at 9:17 PM 0 comments

## Friday, October 10, 2008

### Cut and truth

Michael Kremer does something interesting in his "Kripke and the Logic of Truth." He presents a series of consequence relations, ⊧_{(V,K)}, where V is a valuation scheme and K is a class of models. He provides a cut-free sequent axiomatization of the consequence relation ⊧_{(V,K)}, where V is the strong Kleene scheme and K is the class of all fixed points. He proves the soundness and completeness of the axiomatization with respect to the class of all fixed points. After this, he proves the admissibility of cut. I wanted to note a couple of things about this proof.

As Kremer reminds us, standard proofs of cut elimination (or admissibility) proceed by a double induction: the outer induction is on the complexity of the cut formula, and the inner induction is on how long the cut formula has persisted in the proof. Kremer notes that this will not work for his logic, since the axioms for T, the truth predicate, can reduce complexity. (I will follow the convention that 'A' is the quote name of the sentence A.) T'A' is atomic no matter how complex A is, so ∃xTx is more complex than T'∃xTx' despite the latter being an instance of the former. Woe for the standard method of proving cut elimination.
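To see the problem concretely, here is a small sketch. The representation and the particular complexity measure are mine, not Kremer's: formulas are nested tuples, and anything of the form T'A' counts as atomic no matter how complex the quoted A is.

```python
# My own toy complexity measure, illustrating why the usual induction
# on cut-formula complexity gets no traction with a quotational T.

def complexity(f):
    op = f[0]
    if op in ('atom', 'T'):          # T'A' is atomic: quotation hides A
        return 0
    if op == 'not':
        return 1 + complexity(f[1])
    if op in ('and', 'or'):
        return 1 + max(complexity(f[1]), complexity(f[2]))
    if op == 'exists':               # ('exists', variable, body)
        return 1 + complexity(f[2])
    raise ValueError(op)

exists_Tx = ('exists', 'x', ('T', ('var', 'x')))   # ∃xTx
quoted = ('T', exists_Tx)                          # T'∃xTx'

# The instance is *less* complex than the quantified formula it
# instantiates, so an induction that unwinds cuts by descending
# through subformulas has nowhere to descend.
assert complexity(quoted) < complexity(exists_Tx)
```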

Kremer's method of proving cut admissibility is to use the completeness proof. Cut is a sound rule: it preserves validity, so any sequent derivable with cut is valid, and by completeness it is already derivable in the cut-free system. Hence cut is admissible. Given the complexity of the completeness proof, I'm not sure this saves on work, per se.

Now that the background is in place, I can make my comments. First, he proves the admissibility of cut, rather than mix. The standard method proves the admissibility of mix, which removes every occurrence of the cut formula at once, since the induction would otherwise run into problems with the contraction rule. Of course, the technique Kremer uses is equally applicable to mix, but the main reason we care about mix is that showing it admissible suffices for the admissibility of cut. Going the semantic route lets us ignore the structural rules, at least contraction.
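For reference, the two rules in standard Gentzen-style notation (the textbook formulation, not necessarily Kremer's own presentation):

```latex
% Cut: one occurrence of A is removed from each premise.
\frac{\Gamma \vdash \Delta, A \qquad A, \Gamma' \vdash \Delta'}
     {\Gamma, \Gamma' \vdash \Delta, \Delta'}\ \textsc{cut}

% Mix: Delta^* and Gamma'^* delete *every* occurrence of A,
% which is what lets the induction absorb contractions on A.
\frac{\Gamma \vdash \Delta \qquad \Gamma' \vdash \Delta'}
     {\Gamma, \Gamma'^{\ast} \vdash \Delta^{\ast}, \Delta'}\ \textsc{mix}
```

Here Δ* is Δ with all occurrences of A deleted, and Γ'* likewise, where A occurs in both Δ and Γ'. If contraction duplicates the cut formula, a single cut can't chase down both copies, but a mix sweeps them all up at once.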

Next, it seems significant that Kremer used the soundness and completeness results to prove cut admissible. He describes this as "a detour through semantics." He doesn't show that a proof theory alone will be unable to prove the result, just that the standard method won't do it. Is this an indication that proof theory cannot adequately deal with semantic concepts like truth? This makes it sound like something one might have expected from Goedel's incompleteness results. There are some differences though. Goedel's results are for systems much stronger than what Kremer is working with. Also, one doesn't get cut elimination for the sequent calculus formulation of arithmetic.

Lastly, the method of proving cut admissibility seems somewhat at odds with the philosophical conclusions he wants to draw from his results. He cites his results in support of the view that the inferential role of logical vocabulary gives its meaning. This is because he uses the admissibility of cut to show that the truth predicate for a three-valued language is conservative over the base language and is eliminable. These are the standard criteria for definitions, so the rules can be seen as defining truth. Usually proponents of a view like this stick solely to the proof theory for support. I'm not sure what to make of the fact that the Kripke constructions used in the models for the soundness and completeness results (and hence for cut admissibility) do not seem to fit neatly into this picture. That being said, it isn't entirely clear that appealing to the model theory as part of the groundwork for the argument that the inference rules define truth does cause problems. I don't have any argument that it does. It just seems out of step with the general framework. I think Greg Restall has a paper on second-order logic's proof theory and model theory that might be helpful...

Posted by Shawn at 4:55 PM 0 comments

Labels: inferentialism, logic

## Thursday, October 09, 2008

### Pitt-CMU conference update

I'm pleased to announce that Chris Pincock, currently at the Center for Philosophy of Science, has agreed to be the faculty speaker at the conference. Hartry Field is the keynote speaker. For more info see the website.

Posted by Shawn at 10:08 PM 0 comments