Monday, March 31, 2008

Quine on ontology

From the depths of Word and Object comes this delightfully worded statement on ontology:
"What distinguishes between the ontological philosopher's concern and all this is only breadth of categories. Given physical objects in general, the natural scientist is the man to decide about wombats and unicorns. Given classes, or whatever other broad realm of objects the mathematician needs, it is for the mathematician to say whether in particular there are any even prime numbers of any cubic numbers that are sums of pairs of cubic numbers. On the other hand it is scrutiny of this uncritical acceptance of the realm of physical objects itself, or of classes, etc., that devolves upon ontology. Here is the task of making explicit what had been tacit, and precise what had been vague; of exposing and resolving paradoxes, smoothing kinks, lopping off vestigial growths, clearing ontological slums."
There is something in the closing lines that appeals to me. Hopefully I'll be able to post something more substantive on this stuff soon. (More promissory notes...)

A short note on Galois connections

I've been thinking about various topics in Dunn's book, one of them being his idea of gaggles, which are generalizations of the idea of Galois connections. I just noticed a connection to this post in the early Dunn material. The proof that every variety is equationally definable appeals to the same sort of Galois connection as the one in the linked post at Logic Matters. First, a brief explanation of Galois connections.

A Galois connection is a pair of functions (f: A→B, g: B→A) on partially ordered sets (A, ≤) and (B, ≤') such that fa ≤' b iff a ≤ gb. Dunn finds these in a lot of places; most surprising to me were the ones in modal logic. I hope to work out a more detailed post on their application and interest soon.
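Just to fix ideas, here is a toy check of the defining condition (my own example, nothing to do with Dunn): the inclusion of the integers into the reals, paired with the floor function, forms a Galois connection, which a few lines of brute force can confirm on a finite sample:

```python
# A minimal sketch: the inclusion f: Z -> R and the floor function
# g: R -> Z satisfy the Galois condition f(a) <= b iff a <= g(b).
import math

ints = range(-5, 6)                      # a finite sample of (Z, <=)
reals = [x / 4 for x in range(-20, 21)]  # a finite sample of (R, <=)

def f(a):
    return float(a)        # inclusion of Z into R

def g(b):
    return math.floor(b)   # floor maps R back down to Z

assert all((f(a) <= b) == (a <= g(b)) for a in ints for b in reals)
print("f(a) <= b iff a <= g(b) holds on the whole sample")
```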

To return to the proof mentioned above, for our partially ordered sets we take classes of algebras ordered by ⊆ and classes of equations, also ordered by ⊆. For the functions, we take the operation e that maps a class of algebras to the class of equations valid in it and the operation a that maps a class of equations to the class of algebras that validate every member of the class. The pair (a, e) forms a Galois connection. This is important for the proof mentioned above. It is obvious that for a class of algebras K, K ⊆ (Ke)a. To finish the proof, which I won't do here, one just needs to prove that (Ke)a ⊆ K. This involves a fair amount of technical machinery and several lemmas, but it is pretty in the end. [Edit: The same relation also holds starting with a set Q of equations, i.e. Q ⊆ (Qa)e.]
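Incidentally, both of these inclusions fall out of the connection in one step, once we note that the pair satisfies K ⊆ Qa iff Q ⊆ Ke (the antitone twin of the definition above; both sides just say that every algebra in K validates every equation in Q). A sketch:

```latex
% Taking Q = Ke:  Ke \subseteq Ke holds trivially, so K \subseteq (Ke)a.
% Taking K = Qa:  Qa \subseteq Qa holds trivially, so Q \subseteq (Qa)e.
\[
K \subseteq Qa \iff Q \subseteq Ke
\;\Longrightarrow\;
K \subseteq (Ke)a \quad\text{and}\quad Q \subseteq (Qa)e.
\]
```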

Why do I mention this? This is an illustration of the connection in Peter Smith's post. The models are the algebras and the equations are the axioms. Algebras and equations are rather restricted forms of models and sets of axioms, so this doesn't get the full generality indicated in his post. It's not clear how to get that generality based on this example, since the proof of the latter inclusion, (Ke)a ⊆ K, depends on K being a variety, at least closed under homomorphic images. This is, at least, a fairly intuitive illustration of the connection between classes of models and of axioms. I am having trouble getting Lawvere's paper, so I don't know if there is more to his idea than this sketch. [Edit: It occurs to me that for the Galois connection we don't need the identity that we get in the case of varieties. The important relations are the ones mentioned above, K ⊆ (Ke)a and Q ⊆ (Qa)e. There might be a more general way of getting identity, but this should be enough for indicating the Galois connection between models and sets of axioms.]

Sunday, March 30, 2008

Variations on a title by Austen

I haven't been doing much in the way of natural language semantics or pragmatics lately. It is something that I hope to return to, especially after seeing my inbox this morning. I know some of my readers are actively working in those areas, so they are likely to appreciate this. I just got an email from Amazon saying that David Beaver's Sense and Sensitivity: How Focus Determines Meaning is coming out soon. It is a book on the semantics and pragmatics of focus. From the blurb it looks to have a good overview of the field of formal pragmatics. I took a class on semantics and pragmatics from David that was quite good. Near the end we covered focus some, including David's work, and if that material, or some version of it, is in the book, it is well worth a read.

Saturday, March 29, 2008

Field on paradoxes

Last Thursday Hartry Field gave a talk at CMU on logic. It was supposed to be on revising one's logic but it focused more on his view of truth and solutions to the semantic paradoxes. The bulk of the talk was, apparently, a summary of the first half of his book. Since it was hyper-condensed it was quite hard to follow. I want to make a couple of small comments on the talk.

Field's response to the paradoxes, focusing especially on Grelling's paradox and the liar sentence, seemed to be to suggest using a modified version of Lukasiewicz's continuum-valued logic in which every sentence is assigned a real value in the interval [0,1]. [Edit: The only designated value is 1.] The modification seemed to be the addition of an operator D for determinately true. The value of 'DA' is 0 if the value of 'A' is less than or equal to 1/2, and it increases linearly to 1 for values of 'A' greater than 1/2. As these operators are iterated, the interval in which the value of 'D^nA' is 0 expands. It wasn't really touched on in the talk, but it seems like iterated D isn't equivalent to D. Something being determinately determinately true doesn't look to be equivalent to something being determinately true in this setting. If that is right, why are we interested in the iterated versions D^n or the sequence of iterated sentences A, DA, DDA, etc.? One might think that taking all those together in something like D^ω is the operator that is being aimed for. Field shot this down by noting that this operator doesn't behave correctly, calling things true that are clearly false and false things that are clearly true. One other small point wasn't touched upon. Why was that particular operator used? There are a lot of operators that do similar things, and the cutoff point of 1/2 for determinate falsity is pretty arbitrary, as is most any other point. Maybe it was just to illustrate a technical point, but that point was lost on me.
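To make the picture concrete, here is a sketch of the operator as I understood it from the talk (the exact function Field uses may differ in details; this encoding is mine):

```python
# A sketch of the 'determinately' operator as described in the talk:
# D(v) = 0 for v <= 1/2, rising linearly to 1 on (1/2, 1].
def D(v: float) -> float:
    return 0.0 if v <= 0.5 else 2 * v - 1

def D_n(v: float, n: int) -> float:
    """Iterate D n times."""
    for _ in range(n):
        v = D(v)
    return v

# D^n(v) is 0 on the whole interval [0, 1 - 2**-n], so the 'zero zone'
# expands toward 1 as D is iterated, and DDA differs from DA in general:
print(D_n(0.75, 1))  # 0.5  (A is determinate to degree 0.5)
print(D_n(0.75, 2))  # 0.0  (but not determinately determinately so)
```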

In the Q&A, Field cleared up where the excluded middle held, since his proposed solution to the paradoxes required rejecting unrestricted excluded middle. It turns out that excluded middle could be maintained for purely empirical sentences and all mathematical sentences. Excluded middle, then, doesn't hold for sentences in which the truth predicate is involved.

This may be an obvious thing, but Field provided a nice representation for his D operator. Since he was working in the interval [0,1], the D operator could be represented as a graph with the value of 'A' on the x-axis and the value of 'DA' along the y-axis. It is a small thing but it will be useful in thinking about the relevant sections of Dunn's book. It is nicer to think about pictures, things sadly lacking in that book.

Washing the fur

I read Alexander George's "On Washing the Fur without Wetting It" today. The assessment he gives of the analyticity debate is very appealing. He gives some arguments that the standard interpretation of the debate is incorrect since it makes out Quine's arguments to be too weak or Carnap to be too dense. I need to think about it some more before I can comment on the reconstruction, but I did want to comment on the moral that he draws. The big contribution of the paper is an explanation of how the different takes on analyticity change what is at stake in the debate. As George puts it:
"[F]or this distinction between kinds of truth is of a piece with one between kinds of difference, and so differences over anlyticity must affect how those very differences can be conceived. This is no doubt a source of the difficulty in obtaining a satisfactory perspective on the dispute...: for there appears to be no way even to judge what kind of dispute it is without thereby taking a side in it. To try to determine the nature of a disagreement over the nature of disagreements without taking any kind of position on that disagreement is just to try to wash the fur without wetting it."
The last sentence explains the title. I don't think it is correct in general, but George makes a case that it applies to the different stands on analyticity in particular, which is all that is needed.

If one endorses the distinction, like Carnap, the debate seems insubstantial since it appears to be a matter of framework-external questions. If one denies the distinction, like Quine, then it looks like there are substantial things at issue. George thinks this is so for Quine because once he rejects the distinction, "there is nowhere for any dispute to locate itself beyond the arena of factual disagreement." If one looks at the debate as a Carnapian, it will look like Quine isn't saying anything damaging against Carnap. If one looks at the debate as a Quinean, it will look like Carnap isn't offering a strong defense. This leaves the question of how George's own exposition fares since, according to George, we can't approach this debate in a way that doesn't beg the question on one side or the other. He seems to do a good job of appearing neutral, which would undermine his point.
Regardless, his reading is an improvement over the traditional ones because it makes some sense out of why this debate is so hard to get a grip on.

The paper closes with a sort of aftermath for Quine. George argues that Quine's empiricism and linguacentrism (the name for Quine's view that one cannot escape a language and all systems of belief to pass judgment on disagreements, ontological or otherwise) lead to a problem. Quine wants to maintain that theories can be incompatible and empirically equivalent but, in virtue of some more theoretical claims, one be true and the other false. This is dubbed "sectarianism." Quine at some points later in life considered a view on which such empirically equivalent theories could both make a claim to truth. This is dubbed "ecumenism." This position, George thinks, starts to look quite Carnapian. If two theories are empirically equivalent, then choosing one over the other is a pragmatic matter, hinging on no facts of the matter beyond the empirical on which they agree. The tension is between (1) wanting to say one theory is true and the other false even though (2) there is no empirical evidence that could bear this out. (2) is supported by his empiricism, but I'm a bit confused about how linguacentrism is supposed to support (1). It seems like the rejection of the analytic/synthetic distinction is supposed to get the disagreement between theories into the realm of the empirical, in some sense, a realm which would allow at most one to be true. Linguacentrism is supposed to be useful to Quine in responding to this, but I'm having trouble seeing it. George presses the tension, claiming that Quine is forced to be more like Carnap and adopt an ecumenical stance. In the end it is hard to see what Quine can end up maintaining that Carnap would disagree with.


If George is right up to this point, then his conclusion seems correct. He makes it sound like Quine didn't see a fundamental tension in his own views. There is something about the latter part of the article that seems like a starting point for a response, namely the claim that linguacentrism is the source of the problem. I may just need to read this part again, but in reviewing the article it seems like the support for (1) isn't coming directly from linguacentrism; it seems to be a side issue. This doesn't eliminate the problem but it might sharpen it for a response. Another possibility, just for fun, is that this is another indication that Quine should give up empiricism, as Davidson urged. Of course, this is a different reason than the one Davidson provided. (What was that article?) Quine, of course, would hate this reply, as he indicated in his response to Davidson.

Wednesday, March 26, 2008

Giving and taking

I just want to put up a couple of quotes from things I'm reading this term that have a certain affinity. This is, of course, not a novel idea. One is from Sellars's EPM section 38:
"The idea that observation "strictly and properly so-called" is constituted by certain self-authenticating nonverbal episodes, the authority of which is transmitted to verbal and quasi-verbal performances when these performances are made "in conformity with the semantical rules of the language," is, of course, the heart of the Myth of the Given, For the given, in epistemological tradition, is what is taken by these self-authenticating episodes. These 'takings' are, so to speak, the unmoved movers of empirical knowledge, the 'knowings in presence' which are presupposed by all other knowledge, both the knowledge of general truths and the knowledge 'in absence' of other particular matters of fact. Such is the framework in which traditional empiricism makes its characteristic claim that the perceptually given is the foundation of empirical knowledge"
The other is from Kuhn's Structure of Scientific Revolutions, chapter X (p. 126 in the third edition):
"The operations and measurements that a scientist undertakes in the laboratory are not 'the given' of experience but rather 'the collected with difficulty.'"
I had forgotten how heavily Kuhn leans on perception in his book. (We are finishing up with Structure in the philosophy of science class I'm TAing for.) I have done no research on this topic, but I wonder if Kuhn had read EPM or was familiar with Sellars's work more generally. Kuhn doesn't cite Sellars anywhere in Structure even though there are places, especially chapter X, where it would seem to fit.

Monday, March 24, 2008

Varieties are the spice of logic

This is a retrospective look at Dunn's book. I'm trying to figure out what the point of the major theorems of the second chapter was, apart from giving the reader some facility with a couple of important algebraic concepts.

The second chapter of Dunn's algebraic logic book introduces several notions from universal algebra. The most important among these are: algebra (set with some operations under which it is closed), homomorphism (structure preserving function), and congruence relation (structure respecting equivalence relation). In the chapter he goes on to prove various theorems including what he describes as two big theorems from algebra. One of these is a theorem which says that every algebra is isomorphic to a subdirect product of prime algebras. I'm not sure how this is important for the development of the book. The other big theorem is about varieties, which are classes of algebras closed under direct products, subalgebras, and homomorphic images. It says that every variety is equationally definable and conversely. The converse is the easy direction. The hard direction takes several lemmas and the introduction of several new concepts. In subsequent chapters Dunn looks at logics whose operations are equationally definable, algebraized if you will. (Algebra-ized or alge-braised? Not sure.)
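For a concrete instance (a standard example, not specific to Dunn): lattices form a variety, definable by the familiar equations below, and the easy direction amounts to checking that equations like these are preserved by homomorphic images, subalgebras, and direct products.

```latex
\[
x \wedge x = x \qquad x \vee x = x \qquad \text{(idempotence)}
\]
\[
x \wedge y = y \wedge x \qquad x \vee y = y \vee x \qquad \text{(commutativity)}
\]
\[
x \wedge (y \wedge z) = (x \wedge y) \wedge z \qquad x \vee (y \vee z) = (x \vee y) \vee z \qquad \text{(associativity)}
\]
\[
x \wedge (x \vee y) = x \qquad x \vee (x \wedge y) = x \qquad \text{(absorption)}
\]
```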

This sounds like it should be important for the development of the book. Indeed, some of these classes of algebras turn out to be fairly well-known ones, like boolean algebras for classical logic. Right after this result further theorems are proved about soundness and completeness of a set of equations Q with respect to a word algebra W (an algebra whose elements are "words" and whose operations combine the "words"). In Dunn's phrasing, Q is sound (satisfied by every algebra) in a class K of algebras with the same number of operations of the same arities as W, if the quotient algebra W/≡Q is free (the elements of the quotient are distinguishable in terms of the operations) in K. If W/≡Q ∈ K, then Q is complete with respect to K. This also sounds good.
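If I am reconstructing the setup correctly (this is my gloss on the standard construction, not Dunn's wording), the congruence in question identifies words that Q proves equal:

```latex
% My gloss: two words are congruent when Q proves them equal,
% so the elements of W/\equiv_Q are words up to Q-provable equality.
\[
s \equiv_{Q} t
\quad\text{iff}\quad
Q \vdash s \approx t \ \text{in equational logic.}
\]
```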

Later in the book we are shown what some logics look like in equational form. Let's say, for concreteness, the implicational fragment of intuitionistic logic. We also know what that looks like in Hilbert-style axiom form as well as natural deduction and Gentzen-style sequent calculus. The correspondence between Hilbert-style axioms, natural deduction, and sequent calculus is fairly well understood. Most proof theory books should go over translations between them. I'm not sure what a good answer is to the relation between the equational form and the axiomatic form. This is troubling me since this seems like it should be fairly clear. I think there is some answer in the Dunn book and I hope to return later this week with an answer. I'll just register the following point. When talking about the equational form, we are also talking about a class of algebras. We have at our disposal notions like homomorphism and lots of algebraic machinery. There seems to be more structure there than in the sparse desert landscape that is a Hilbert-style axiomatic system. All the logic that we need to tease out the consequences of the equational form is equational logic: rules for reflexivity, symmetry, transitivity, and substitution.
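Spelled out, those rules are roughly the following (the Birkhoff-style presentation; replacement of equals inside a term context is sometimes listed as a fifth rule or folded into substitution):

```latex
\[
\frac{}{\,t \approx t\,}\ (\text{refl})
\qquad
\frac{s \approx t}{t \approx s}\ (\text{symm})
\qquad
\frac{s \approx t \quad t \approx u}{s \approx u}\ (\text{trans})
\qquad
\frac{s \approx t}{s\sigma \approx t\sigma}\ (\text{subst},\ \sigma\ \text{a substitution})
\]
```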

Have we switched subjects between the equational characterization and the axiomatic characterization? The latter belongs naturally to proof theory, but the former may be best understood as belonging to semantics. The equational characterization is equivalent to a class of algebras, and classes of algebras are good for semantics. This seems like it is on the right track. Dunn says that a logic is algebraizable if one can give an adequate equational characterization of an algebraic semantics. The proper relation is either not clear in Dunn or I am missing something important.

Sunday, March 23, 2008

A note on analyticity

This week in the Quine and Carnap class we're talking about the analyticity debate. I've read most of the Carnap pieces for it and wanted to write a short note on them. It is a rough note. One thing that surprised me in Carnap's response to Quine's "Carnap on Logical Truth" was how little weight he seems to place on analyticity. Carnap says that if there is a change of meaning of a term along the lines that Quine discusses, then the analytic truths change as well. They change because we have changed languages, from L_n to L_{n+1}. I had thought that there would be more stability in languages and analytic truths. Rather than switching the whole language and with it the analytic truths, one would just change part of the language, leaving the analytic truths as is.

I'm not sure if I think this is a good response. It trades a difference in meanings for a difference in languages. It makes it hard to see what the distinction is between speaking a language in which the meanings change and switching between speaking different languages. This seems reasonable enough. I'm not sure what sort of pragmatic ground one could supply for opting for the one rather than the other. I had thought that analytic truth supported the former but Carnap seems to say no.

The relativization to a language prompted the question, legitimately or not: What is the difference between the predicates 'analytic sentence', 'true' and 'logically true'? In a way they are similar; they are all relativized to a language. Truth doesn't have a lot of weight put on it by Quine. (I might be wrong here. I'm going to talk to someone about that tomorrow.) He mentions the use of it to generalize about linguistic items. Analytic sentences are a species of the genus truth, as Quine says, as are logical truths. Logical truths are true in virtue of logical form though. Analytic truths are true in virtue of meaning, which surely means that they are true in virtue of the meanings in that structural configuration. Not all sentences with those meanings are true nor are all sentences with that structure true.

What extra do we signify when calling a sentence analytic? It isn't a greater commitment to its truth. That can be abandoned readily. Carnap says the analytic sentences aren't ones that must be held come what may. If there is recalcitrant experience we can always switch our language to a similar one in which certain sentences are no longer analytic. A change in analytic sentences is a change in meaning though, so it doesn't seem like much can be made of truth in virtue of those meanings; they are too fluid.

At this point I'm a little confused about what Carnap is maintaining in opposition to Quine. In "Carnap, Quine and Logical Truth," Isaacson gives an interpretation of the analyticity debate that puts little distance between Quine and Carnap's ultimate positions. When I read it, this seemed rather surprising. After reading Quine and Carnap's contributions, it seems pretty close to the truth.

Wednesday, March 19, 2008

The crowning glory of modern logic

Quoting Paul Halmos on Goedel's incompleteness theorems in their algebraic form:
"What has been said so far makes the Goedel Incompleteness theorem take the following form: not every Peano algebra is syntactically complete. In view of the algebraic characterization of syntactic completeness this can be rephrased thus: not every Peano algebra is a simple polyadic algebra. ... What follows is another rephrasing of this description... Consider one of the systems of axiomatic set theory that is commonly accepted as a foundation for all extant mathematics. There is no difficulty in constructing polyadic algebras with sufficiently rich structure to mirror that axiomatic system in all detail. Since set theory is, in particular, an adequate foundation for elementary arithmetic, each such algebra is a Peano algebra. The elements of such a Peano algebra correspond in a natural way to the propositions considered in mathematics; it is stretching a point, but not very far, to identify such an algebra with mathematics itself. Some of these 'mathematics' may turn out to possess no non-trivial proper ideals, i.e. to be syntactically complete [since ideals represent refutable propositions]; the Goedel theorem implies that some of them will certainly be syntactically incomplete. The conclusion is that the crowning glory of modern logic (algebraic or not) is the assertion: mathematics is not necessarily simple."
That was from his "Basic Concepts of Algebraic Logic," available on JSTOR. Two comments: (1) It'd be nice if more logic articles made me laugh. (2) I must figure out how to work a pun like that into some future article. (As is probably clear from those comments, I have a weak spot for puns.)

Thursday, March 13, 2008

A pithy note on Quine

I'm reading Word and Object in its entirety, something I've never done before. I tend to stop around the middle of chapter 2. I came across something I found surprising. On p. 76-77, Quine quotes Wittgenstein in the context of explaining the indeterminacy of translation: "Understanding a sentence means understanding a language." This was a little surprising since it is from the Blue and Brown Books. I thought they didn't have wide circulation. At least that is the impression I got from somewhere, possibly Monk's biography of Wittgenstein. Quine had referenced the Tractatus in some other essays, but I had chalked that up to the influence of the Vienna Circle and the Tractatus generally. Apparently Quine read him some Wittgenstein.

Sunday, March 09, 2008

Representation theorems

In the Dunn book on algebraic logic, there is a chapter on something called representation theorems. This was not something I'd come across before and it was not really explained. The first question is: what is a representation theorem? The answer seems to be that it is a canonical way of mapping the structure in question into a set-theoretic structure, with operations on the sets corresponding to the operations of the original structure.
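The classic instance, which I'd guess is the template for such a chapter, is Stone's theorem for Boolean algebras: send each element to the set of ultrafilters containing it, and the algebraic operations become honest set operations:

```latex
\[
h(b) = \{\, U : U \text{ an ultrafilter of } B,\ b \in U \,\}
\]
\[
h(b \wedge c) = h(b) \cap h(c) \qquad
h(b \vee c) = h(b) \cup h(c) \qquad
h(\neg b) = h(\top) \setminus h(b)
\]
```

Since h is injective, B is isomorphic to a field of sets.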

The next question is: why are these interesting? I'm not really sure. The set-theoretic structures have the benefit of being extensional. That could be an epistemological benefit if the original structures aren't obviously extensional. Some of the structures are a little arcane though. For example, the representations of fusion, relevant implication, and the ternary accessibility relation are rather involved. I'm not going to include them; I don't have the reference handy. It isn't simplicity that is the aim of the mapping, per se.

The project of showing that all of math can be done in set theory was something that I thought was more of an early 20th century phenomenon. This seems to be reflected in the things cited. They are all before 1950. Nothing from set theory is used to reveal anything in the original structures, so it isn't a case of translating from the original language, such as lattice theory, to set theory, finding something neat, then translating the result back into the original language. Some of the representations are technically quite nice but I'm unsure why one would be interested in them in the first place outside of the desire to show that the original structures can be found in the universe of sets.

Generally, Dunn's book is quite good, but it is kind of frustrating that some of the more heavy-duty technical parts of the book, such as the representation chapter and subsequent representation theorems, are not motivated. More importantly, the idea and point of a representation theorem are not explained at all. They haven't gotten much clearer as the book goes on. [Edit: The comments clear things up a lot.]

Links on grad school stuff

I came across two pages of advice to grad students: Stearns's "Some Modest Advice for Graduate Students" and Huey's "Some acynical advice for graduate students" (pdf). Both of these are directed primarily at grad students in the sciences, both authors being zoology and ecology professors. The two pages are fairly divergent in their advice. The former is a bit more cynical (= modest, somehow), the latter less cynical. Most of the stuff seems applicable to philosophy grad school. There is some stuff in there about being treated like a colleague. Maybe I'm oblivious, but that doesn't seem to be a problem in the philosophy programs. Enjoy with an appropriate amount of salt.

Saturday, March 08, 2008

A note on Quine

In response to some conversations, class, and a few comments over at Greg's, I read most of Quine's Philosophy of Logic. It turns out that Quine is very much a truth-before-consequence philosopher. What is really surprising is how little the notion of consequence figures into Quine's book. The index on the edition I have doesn't even have an entry for consequence. I don't remember any discussion of consequence coming up during the course of reading. In the discussion of deviant logics, Quine only talks about different logical truths, nothing about differing consequence relations. Just from reading his book you'd get the idea that logical consequence wasn't much of a topic, let alone a central one to the idea of logic.

Truth and consequence

Over at Greg's blog, there was a post on which is prior, logical truth or consequence. They are, in many cases, interdefinable. For example, we might want something like: A |= B iff |= A→B. Of course, this will depend on having the appropriate expressive resources in the language. The left to right direction fails for any logic without a conditional, defined or primitive. An example of this would be the conjunction-disjunction fragment of classical propositional logic. This looks a little artificial though. A more natural example is the logical system of Aristotle's Prior Analytics. It has no conditional locutions, definable or primitive. This depends on using the interpretation found in Smith's introduction to the Prior Analytics, which I believe derives from Corcoran's work, rather than the axiomatic interpretation given by Lukasiewicz. There might be another counterexample coming from connectives defined using Kleene's strong matrix, because the conditional defined on it (at least, the standard one) yields no tautologies. For example, p→p gets the value one-half when p is assigned the value one-half. I'm not sure about this because I'm not sure how consequence is defined for that system.
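To illustrate the point about tautologies, here is a quick sketch using the usual min/max presentation of the strong Kleene matrix (my encoding; how consequence should be defined is the separate question just mentioned):

```python
# Strong Kleene connectives on {0, 1/2, 1}, with the conditional defined
# in the standard way as not-A or B.
from fractions import Fraction

VALUES = [Fraction(0), Fraction(1, 2), Fraction(1)]

def k_not(a):
    return 1 - a

def k_or(a, b):
    return max(a, b)

def k_impl(a, b):
    return k_or(k_not(a), b)

# p -> p takes the value 1/2 when p does, so it is not a tautology
# if 1 is the only designated value.
for p in VALUES:
    print(p, k_impl(p, p))   # prints: 0 1, 1/2 1/2, 1 1
```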

Does the schema fail in the right-to-left direction? I don't know of a counterexample and I'm doubtful there are any.

Why would one think that direction would always hold? The conditional is an object language expression that is supposed to capture the consequence relation. Sometimes the consequence relation can outstrip it. Things would be amiss if the object language outstripped the metalanguage. Here's an idea. There are fewer restrictions on the stuff that appears to the left of the turnstile, e.g. there can be infinitely many things on the left, and they can be gathered using sets instead of conjunctions. If the right-to-left direction is going to fail, we'd need more restrictions on the stuff appearing on the left-hand side of the turnstile than on the right. When put like that, it doesn't seem obvious that the right-to-left direction cannot fail, but the context would have to be very unusual if there is one. As a parting thought, it isn't clear that we could place restrictions like that on the left-hand side of the turnstile and still have something recognizable as a consequence relation, i.e. something satisfying Tarski's conditions.
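For what it's worth, the right-to-left direction needs only the assumption that the conditional obeys pointwise modus ponens, i.e. any valuation satisfying A→B and A satisfies B:

```latex
% Assuming pointwise modus ponens for the conditional:
\[
\models A \to B
\;\Longrightarrow\;
\forall v\, (v \models A \to B)
\;\Longrightarrow\;
\forall v\, (v \models A \Rightarrow v \models B)
\;\Longrightarrow\;
A \models B.
\]
```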

Already presented thoughts on structure

Like I said, the Pitt-CMU conference has come and gone. I said before that if my comments on Ole's paper went over well, I'd put them up here. The comments seem to have gone well, so I'm going to put them up. The comments won't make much sense without having read the paper, which is on proof-theoretic harmony.

1. First, a short summary of the paper.
Our initial problem was how to take inferential rules as conferring meaning while avoiding TONK and its ilk, and this was to be done through the notion of harmony. A promising candidate was the natural deduction GE-harmony strategy, but this cannot be the whole story semantically, as it looks like one gets different meanings for the logical constants when there are different structural rules. This led us to a sequent calculus strategy, MIN, which takes meaning to be conferred by a proper subset of the rules. The version looked at distinguished the operational meaning, specified by the operational rules, from the global meaning, specified by the set of provable sequents. This runs into two problems: violating constraints set by the inferentialist project and demarcating structural rules. The conclusion is that unless MIN can be patched with a clarification of structural rules, or GE-harmony made viable in some other way that gets around the problems with the structural rules, harmony can't be the complete semantic story for the inferentialist.
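For reference, TONK is Prior's connective governed by the rules below; in any transitive consequence relation they yield A ⊢ B for arbitrary A and B:

```latex
\[
\frac{A}{A \ \mathrm{tonk}\ B}\ (\text{tonk-I})
\qquad\qquad
\frac{A \ \mathrm{tonk}\ B}{B}\ (\text{tonk-E})
\]
```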

2. If we look back at the general thesis of inferentialism, it says that the meaning of logical constants is fully determined by the inferential rules governing their use. If we start by looking at natural deduction, then the first move, as Ole points out, will be to take the intro- and elim-rules as being those rules. If we follow Dummett, Prawitz, Milne, and Read in this, then it is easy to miss the point that Ole brings to our attention.

If we then shift to sequent calculus, it is natural to look at the corresponding rules, the left and right intro-rules, or the operational rules. In sequent calculus new rules appear, namely the structural rules. These seem to go missing in the natural deduction setting. Really, they are there, but they are implicit in the discharge policies, more implicit than in the sequent calculus's structural rules. They are implicit in the sense that they do not even appear in a propositional form in natural deduction reasoning. Structural rules are more explicit, in the sense that they are at least an instance of a rule. They are not fully explicit since there is no single proposition or even sequent expressing the acceptance of the structural rule. With this in mind, we might want to push a point that Ole brings up at the end of his paper. He mentions that there are systems formulated in terms of hypersequents, systems where derivations are performed on finite multisets of sequents. These allow the formulation of more structural rules. Other systems, like display logic, bring out structure by including structural operators. These are ways of making structural rules explicit that would otherwise be left implicit in how the derivations are carried out. It is possible that in order to make progress on INF, a shift must be made to using these systems in order to get enough structural rules in view to make the necessary distinctions. Another possibility is that there is a need for a formalism that allows us to reason explicitly about structural rules. At the end of the paper, we're left with a negative conclusion, so I'm curious if Ole has any view about positive morals to draw.

3. Switching gears slightly, the problem with structural assumptions arises when we are dealing with rules that are hypothetical, or discharge assumptions. In standard approaches, this is usually limited to the vee elim-rule or the arrow intro-rule. However, in an effort to make the rules more uniform and general, the GE-approach makes all the rules hypothetical. In the sequent calculus systems, the structural rules are already present, so the problem arises there immediately.
In order to get around this problem, Paoli distinguishes two kinds of meaning for logical constants. There is the operational meaning, which is fully determined by the operational rules, and there are two kinds of global meaning, which are determined by the full set of rules for the system. This minimalism about the meaning picks out a subset of the rules as meaning conferring and claims that only those contribute to meaning while the other rules merely play a role in the logic.

There is then a question of whether one can distinguish the operational and structural rules in terms of use. As Ole points out, the prospects of finding a difference in everyday reasoning practices that reflects things like structural rules or discharge policies seem slim. This brings up the question of to what extent logic reflects, or should reflect, our practices of reasoning. If we are looking for a difference in use that appears in non-formalized contexts, the prospects are slim. There is then a question of whether things like the intro- and elim-rules can viably be maintained if reflecting actual reasoning is our goal. Gilbert Harman thinks not. Others, like Prawitz, I think, believe so. If, instead of a difference in everyday inferential practice, we are looking for a difference that comes out in the formalization, prospects are maybe a little better. Which of these was meant will determine how the MIN proponent will proceed. An INF proponent will not want too much space to open up between the two options.

4. The paper closes with a problem for MIN that leads to a general question. What is a structural rule? To put the question another way, what is the role in reasoning of the structural rules? The paper closes by looking at this in the case of intermediate logics. These logics are formulated in terms of hypersequents, so their operational rules are, strictly speaking, different from those of classical logic, which is formulated purely in terms of normal sequents. If one wants to defend MIN, one wants to chalk this difference up to structural assumptions, not a real semantic difference in operational rules. The worry is that any difference in derivational power can be assigned to structural assumptions in order to save sameness of operational meaning. I take it that the shift to derivational power is because we have sameness of operational meaning but difference in global meaning between the logics. It is a little unclear to me where exactly the problem lies. The worry might be that our idea of structural rules is shaky because formulations of operational rules may build in structural assumptions. (Like some sequent formulations of classical logic.) Or, it might be that different formalisms cover different sorts of structure, e.g. sequents reveal some, hypersequents reveal more, and we can't tell in advance what is hidden. Or are there just no sharp definitions? (Rules that do not contain any logical constants essentially, perhaps?)
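For concreteness, the usual suspects do fit that tentative definition, containing no logical constants essentially:

```latex
\[
\frac{\Gamma \vdash \Delta}{\Gamma, A \vdash \Delta}\ (\text{weakening})
\qquad
\frac{\Gamma, A, A \vdash \Delta}{\Gamma, A \vdash \Delta}\ (\text{contraction})
\qquad
\frac{\Gamma \vdash A \quad \Gamma', A \vdash \Delta}{\Gamma, \Gamma' \vdash \Delta}\ (\text{cut})
\]
```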

[The next two paragraphs are rougher ideas which weren't presented] To put the question another way, what is the role in reasoning of the structural rules? Here's an idea, taken from Belnap, sort of. One might think that the structural rules provide the context of deducibility. Of course, the question about meaning in different contexts of deducibility still comes up and this may be what Paoli's distinction between operational and global meaning trades on. But, if we are willing to adjust the context suitably, there are even non-trivial systems that have TONK as a connective. Harmony might be a notion that depends on the context of deducibility in a bigger way than expected. When there are fewer structural rules in play, it would be easier for rules to be harmonious.

One question that comes up is how far one can push the MIN move of attributing differences in derivational power to structural rules. Could we, for example, maintain that all conditionals are, really, operationally, at base some minimal conditional, with just that minimal meaning, which is obscured by the semantically insignificant structural assumptions? Suppose I am a fan of the Lambek calculus with its left and right conditionals, which collapse into a single one in the presence of some structural rules. Does it make sense to maintain that all of those classical conditionals are really either the left or the right Lambek conditional, just obscured by structure?

5. To close, a question: The thing that causes problems for the harmony-as-reduction account is a violation of the side conditions on modal formulae. Is there a connection between side conditions and the discharge policies or structural assumptions? Or is this possibly a different problem, one that introduces a new sort of constraint that may depend on non-inferential properties?

Wednesday, March 05, 2008

Another free book

I seem to post lots of links to free books. I will continue this trend. Keeping with this semester's theme of algebraic logic (more posts on this coming!), I came across Burris and Sankappanavar's A Course in Universal Algebra, which is a graduate algebra book. This is more algebra than logic, I suppose. I glanced at it but I don't know how accessible it is without background. It has a section on connections to model theory and what looks like a lot of exercises dealing with lattices. Lattices are neat.

Also, there is a free copy of Diestel's Graph Theory online. Graphs are neat.

Actually, John Baez has links to several online math and physics texts on this page. Some of them seem somewhat removed from my current interests, but it is nice to know that they are out there.

Monday, March 03, 2008

An idea for a class

Recent events have made me think that it would be neat to take (or someday teach, perhaps?) a class on contemporary empiricism. "Contemporary" might be slightly gerrymandered since I'm not exactly sure of some of the publication dates. But what I was thinking was post-Quine, post-Vienna Circle, post-Sellars attempts at empiricism. The things I had in mind were: van Fraassen's constructive empiricism in The Scientific Image and his later stuff like The Empirical Stance, McDowell's minimal empiricism in his Mind and World and whatever the appropriate essays are (his exchange with Brandom on perception maybe?), and Gupta's recent stuff in Empiricism and Experience. I'm not sure what else would go into it. Possibly some stuff at the start about the challenges to empiricism, such as "Two Dogmas" and EPM. That might be a bit much. What other stuff would count as contemporary empiricism, broadly construed, such as things that focus on the content of experience and how experience shapes knowledge? Surely there are other philosophers who would count. Maybe even some that don't have some sort of Pitt association. [Edit: It appears that Jesse Prinz claims some empiricist sympathies. I'm not sure how much he fits, never having read his stuff. If he works, then he also fits the bill of a non-Pitt person.]

Conference recap

The Pitt/CMU grad conference was on Saturday. I like to think it went well despite the weather turning awful the day before. I was worried that some of the speakers were not going to make it, but they all arrived without fuss. The conference turnout was pretty good. The student talks were all solid. I got to meet Ole and Errol. I hope they both found the trip worthwhile. Both of their talks went well. I commented on Ole's talk. I was pleased with how it went. I'm still developing some of the ideas from that, so there might be more here on that topic. I might put my comments online here. They would only be coherent to people that have read Ole's paper though. Ole's presentation was quite good. I want to say that was the most people I've seen at a philosophy of logic talk, but I think the audience was slightly larger when Dag Prawitz gave a two lecture series at Stanford a few years ago. Errol's talk was good.


Other treats from the conference included Gordon Belot's paper on geometric possibility, an idea for developing relationalism about space. I'm teaching relationalism to my undergrads, so it was cool to see a more "high tech" version of the idea spelled out. One of the speakers presented a paper criticizing van Fraassen's epistemology. Since van Fraassen was the keynote, he was given the opportunity to reply to the paper. He did an excellent job laying out his reasons for moving away from traditional epistemology and bringing out the nature of the disagreement. I want to write something on that topic, but I need to look at some of his post-Scientific Image writings first. The response helped shed light on how his project and philosophy of science were working. There was a paper on social science methodology which provoked some good discussion from some of the philosophers of science in the crowd.

Van Fraassen's talk was titled "Representation." A lot of it was going over why representation isn't resemblance. He was hesitant to draw any large scale conclusions in the talk since he thinks representation is a family resemblance or cluster concept. Nonetheless, he did give us one thesis, namely that representation makes sense only when one considers the way in which the concrete thing is used as, taken as, or made a representation of something. (This isn't quite the way he formulated it, but it is pretty close. The important thing is that he wants to make representation depend on instances of using or taking something as a representation.) He closed by considering how representations are used in scientific contexts, from interpreting bubble chamber pictures to building models, computerized or mathematical, of empirical phenomena. All in all it was very interesting. I'm now very curious to look into some of his more recent work. The questions afterwards were good and they were asked, I believe, entirely by grad students. The answers were illuminating and helped clear up some things about his view that I was stuck on, particularly how he was understanding models and their relation to phenomena.

On a more personal note, at the party the night before van Fraassen told some stories about Pittsburgh "back in the day," or back in the early 60s. It was neat to hear some stories about Sellars in his heyday, the buzz about the manifest image paper, what the seminars were like. He also told some stories about all the logical stuff going on here, driven by Anderson and Belnap. It was delightful.

Now I don't have to worry about organizing anything until next year. The keynote speaker for the next Pitt/CMU conference is Hartry Field. That should be fun.