Wednesday, February 27, 2008

Quote from Barwise

In reading the intro to Barwise's Admissible Sets and Structures, I came across the following paragraph that I liked.
"A logical presentation of a reasonably advanced part of mathematics (which this book attempts to be) bears little relation to the historical development of that subject. This is particularly true of the theory of admissible sets with its complicated and rather sensitive history. On the other hand, a student is handicapped if he has no idea of the forces that figured in the development of his subject. Since the history of admissible sets is impossible to present here, we compromise by discussing how some of the older material fits into the current theory."
I confess that I know nothing of the details of the history of admissible set theory. I have enough handicaps in that general area. I like Barwise's sentiment, that knowing about how the area developed helps one get a grip on the topic. Actually, I'll trot out the opening to Structure of Scientific Revolutions, an opening which I forgot about till we read it in intro to philosophy of science.
"History, if viewed as a repository for more than anecdote or chronology, could produce a decisive transformation in the image of science by which we are now possessed. That image has previously been drawn, even by scientists themselves, mainly from the study of finished scientific achievements as these are recorded in the classics and, more recently, in the textbooks from which each new scientific generation learns to practice its trade. Inevitably, however, the aim of such books is persuasive and pedagogic; a concept of science drawn from them is no more likely to fit the enterprise that produced them than an image of a national culture drawn from a tourist brochure or a language text."
Rhetorically, that is an impressive opening to the book. It is easier to get fully behind Barwise's more modest sentiment though. Of course, I find these when I'm doing practically no historical work.

Monday, February 25, 2008

Continuing a chain

Apropos of this, I will rise to the challenge, and contribute to the proliferation of an internet meme. My three sentences are:
"Add the mussels, cook for 30 seconds, and then cover the skillet. Reduce the heat to medium-low and cook until all the mussels open, 3 to 4 minutes. (Discard any that do not open.)"
What could be more representative of the contents of this blog? The book nearest to hand was Suvir Saran's American Masala, a cookbook with lots of dishes that are fusions of Indian and American ones. Glancing around, it seems like the next nearest books were volume 3 of Sandman, and Garson's Modal Logic for Philosophers. The latter may have been more representative of this blog, I suppose.

I was tagged by Justin. The instructions are:
1. Grab the nearest book (that is at least 123 pages long).
2. Open to p. 123.
3. Go down to the 5th sentence.
4. Type in the following 3 sentences.
5. Tag five people.

I'll pass this on to bloggers I've met or soon will meet. So, that'll be Aidan, Nate, Sam, Ole, and Kenny.

Saturday, February 23, 2008

Philosophy of logic

Apart from Etchemendy's Concept of Logical Consequence and Read's Thinking about Logic, what are good books on the philosophy of logic? I am not sure where to look for stuff to orient myself in the broader issues.
[Edit: Sol Feferman has the syllabus from his philosophy of logic class available on his website. Its focus is the demarcation problem.]

Happy birthday to me

I started this blog two years ago today. It is already two years and 303 posts old, not counting this one. Crazy. Time for a small retrospective. Since starting this I have gotten into grad school, graduated from undergrad, moved across the country to Pittsburgh, started the program at Pitt, and bought an espresso machine. My interests have slightly shifted away from philosophy of language to philosophy of logic and of science. The Wittgenstein thing and the logic thing are still going strong. I seem to have picked up an interest in Kant along the way. When people ask what my interests are I still say philosophy of language and logic. I just glare when people ask what my dissertation will be on.

Deleting my name and repeats, the top 20 keyword searches that lead people here are:
1. brandom dummett logic
4. medieval language
5. words and other things
6. famous fallacies
7. meaning of oops
8. the world is everything that is the case
9. words of congratulations
11. explicit norms
12. two-dimensional semantics
13. we are sorry to inform you
14. "meaning is use"
17. philosophical speech
18. sellars cultured guy
19. kantian concept
20. latex blogger
They are delightful. I think I see a pattern.

Hopefully I will get in a more substantive post in the next few days.

Thursday, February 21, 2008

Wittgenstein and Carnap

In Logical Syntax, Carnap says that he has shown that one can talk about the logical form of language, and that this is a counterexample to Wittgenstein's dictum that you cannot talk about logical form, as it can only be shown. There is something that seems odd about this. From the little bit of secondary literature I've read, no one really seems to say much about this, although some of Carnap's contemporaries seem to embrace Carnap's claims. It seems like Carnap is talking past Wittgenstein. The problem with fleshing out this claim is that I have to flesh out one of the difficult doctrines in the Tractatus. (As opposed to the simple ones, I guess.) I'm going to attempt to sketch an answer.

Carnap says that we can talk about the logical form of a language within the language. He uses Goedel's arithmetization of syntax as the justification for this claim. This technically works fine. It allows Carnap to talk about the syntax of a language within the language by talking about the numbers that represent the sentences, their formations, and their derivations. This is an entirely language internal feature.
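
As a toy illustration of what arithmetization buys you (my own sketch in Python, not Carnap's or Goedel's actual coding): assign each symbol a number, code a string as a product of prime powers, and claims about syntax become claims about numbers.

```python
# Hypothetical symbol codes for a tiny language; the assignment is arbitrary.
SYMBOLS = {'(': 1, ')': 2, '0': 3, 'S': 4, '=': 5}

PRIMES = [2, 3, 5, 7, 11, 13, 17, 19]  # enough primes for short strings

def goedel_number(string):
    """Code a string as 2^c1 * 3^c2 * 5^c3 * ..., where ci is the code
    of the i-th symbol. Distinct strings get distinct numbers."""
    n = 1
    for p, sym in zip(PRIMES, string):
        n *= p ** SYMBOLS[sym]
    return n

# Syntactic talk becomes arithmetical talk: "the string begins with '('"
# becomes "the code is divisible by 2 but not by 4".
n = goedel_number('(0=0)')
assert n % 2 == 0 and n % 4 != 0
```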

Before getting to Wittgenstein, I need to write a disclaimer. I'm not sure at the moment how the resolute interpretation of this stuff would go, so I'm going to stick with a metaphysical reading of this. (This is compounded by not having the texts handy to flip through. I expect that Kremer's recent paper on the cardinal problem of philosophy and Goldfarb's on the saying/showing distinction will be relevant.) It shouldn't really matter as I think Carnap gets things wrong from both the resolute and traditional readings of the Tractatus.

Wittgenstein's view of logical form, at least before throwing away the ladder, is, roughly, that logical form is what language shares with reality. The passages that say this are likely what Carnap has in mind, and likely how he and others in the Vienna Circle understood it. Both language and reality, propositions and facts, share a form, their logical form. It is in virtue of this that there is a connection between language and the world. Carnap gets things wrong because trying to say what a sentence's logical form is would be to describe a language external connection rather than a language internal feature, the geometry of finite, serial orders of symbols, to use Carnap's phrase. There is then the further question of why Wittgenstein thinks that you cannot say, only show, what the logical form of a proposition is. I don't think I can attempt that without spending some time with the Tractatus, which I do not have handy. In any case, Carnap has talked right past Wittgenstein. Carnap is talking about forms of signs in combination. This is something that I think Wittgenstein would have no problem agreeing with. You can describe the order or arrangement of concrete signs with your signs, but that doesn't get at the logical form of the symbol, which is the sign in meaningful use.

Monday, February 18, 2008

Semantics situated

There is a review of Situating Semantics up at Notre Dame Philosophical Reviews. The main critical part looks to be a discussion of various criticisms and defenses of Perry's notion of unarticulated constituent. The author doesn't say much about the non-language-oriented essays in the volume though. It ends with a cute anecdote too.

Sunday, February 17, 2008

More on correspondence

Dunn finally gave an indication of the sense of correspondence he has been using. He says that it is the same sense in which axioms of modal logic correspond to frame conditions, namely conditions on the accessibility relation. He cites van Benthem as the originator of the term "correspondence." As was said before, the conditions on º correspond to various logical systems for implication, because there is a tight connection between º and the implications. There clearly needs to be a bridge here, as we have logical systems where we want something like frames. The bridge is an accessibility relation of sorts. Dunn uses a ternary relation R, which acts like an accessibility relation, to define fusion and the various implication operations. We are concerned with frames of the basic form (U,R), with U a set of points and R a ternary relation on them.

Given a basic frame as above, we can give a canonical definition of R for →. For a, b, c subsets of U: Rabc iff ∀x,y, if x ∈ a and x→y ∈ b, then y ∈ c. I think this is the entry point for semantics for relevance logic. From the tiny bit I know about it, I think there it is Rabc iff a+b=c, for a binary operation +.
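
For comparison, the Routley-Meyer truth clause for relevance implication (stated from memory, so treat it as a sketch rather than gospel) uses the ternary relation like this:

    x ⊨ A→B iff for all y, z such that Rxyz: if y ⊨ A, then z ⊨ B.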

To return to the first paragraph, there is a theorem about which modal formulas define first-order frame properties. It is called the Sahlqvist theorem. It is in terms of the standard modal language with the standard connectives, box, top and bottom. It syntactically picks out a class of conditional sentences, whose general form is somewhat arcane, and shows that those give rise to frame conditions through a translation into first-order sentences. (My description seems to be fairly inadequate, so please look at the Wikipedia page. I should do a post on the Sahlqvist theorem by itself...) The Sahlqvist theorem gives a sufficient, but not necessary, condition for a modal formula to pick out a first-order definable class of frames.

A question that popped up while reading through the Dunn book was what sorts of conditions on ternary relations are first-order definable. A related question is whether there are any forms of equations that pick out those frames, in the same way that Sahlqvist formulae do for first-order definable modal frames. An answer is not forthcoming in Dunn. I'm also not really sure what the relation is between first-order logic and the algebraic logic stuff in Dunn's book. It doesn't seem like we've used anything beyond first-order logic lately, but I'm not sure if that is an accident of the chapters or a feature of algebraic logic. There isn't an overriding concern to explain which structures and which of their properties are first-order definable. From the opening chapters, it didn't seem like algebraic logic was in any essential way restricted to first-order logic or less. Maybe the later chapters will shed light on this.
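
A standard example of the kind of correspondence at issue (a textbook fact, not something specific to Dunn): the Sahlqvist formula []p → [][]p defines transitivity, and []p → p defines reflexivity.

    []p → [][]p is valid on a frame (W,R) iff ∀x,y,z (Rxy & Ryz → Rxz)
    []p → p is valid on a frame (W,R) iff ∀x Rxx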

Sunday, February 10, 2008

Carnap and Gentzen

In his article "Present State of Research into the Foundations of Mathematics," Gentzen briefly talks about Goedel's incompleteness results. He says that it is not an alarming result because it says "that for number theory no adequate system of forms of inference can be specified once and for all, but that, on the contrary, new theorems can always be found whose proof requires new forms of inference." This is interesting because Gentzen worked with Hilbert on his proof-theoretic projects and created two of the three main proof-theoretic frameworks, natural deduction and the sequent calculus. The incompleteness theorems are often taken as stating a sort of limit on proof-theoretic means. (I'm treading on shaky ground here, so correct me if my story goes astray.) That is to say, any sufficiently strong proof system will be unable to prove certain consequences of its axioms and rules. Adding more rules in an attempt to fix it can result in being able to prove some of the old unprovable statements, but new (maybe just more?) unprovable statements will arise.

Gentzen's reaction to this is to shrug his philosophical shoulders. There may be consequences that we can't capture using our current forms of inference, but that just means we need new forms of inference. To these we will need to add further forms of inference. And so on.

I thought this was, by itself, an interesting position whose foundational worth I haven't figured out. But, in reading Logical Syntax of Language, I found some things that surprised me. First, Carnap read Gentzen. He cites the consistency of arithmetic article. I had thought that Gentzen was mostly unknown or ignored in Europe, sort of like Frege had been. This doesn't dispel that possibility though. The other thing was that Carnap has a similar reaction to Goedel's incompleteness theorems. At least, he has a similar reaction to their consequences for his work. He says, "[E]verything mathematical can be formalized, but mathematics cannot be exhausted by one system; it requires an infinite series of ever richer languages." (LSL 60d)

The parallel with Gentzen should be clear. The reaction seems to come from a common philosophical position with respect to mathematics. I'd like to call that an inferentialist one, although I'm a little hesitant. I'm unclear what Carnap's views about inference are exactly, but it seems like it may be apt. It will depend on whether the inference forms are part of the richer language for Carnap. If that turns out not to work, the common thread might be the syntactic orientation of the two logicians. I think they have different philosophical views about syntax though, so that might not be entirely happy either. I'm not familiar enough with Gentzen to say how Carnap's view of syntax's role in science would strike him. In Carnap there doesn't seem to be the talk of defining the logical constants syntactically or inferentially in the way that is alluded to in Gentzen's papers. Still, it seems like the right sort of response from a certain orientation, one I want to call inferentialist. What Goedel's results show is that the inferentialist can't be satisfied with a static or final system for mathematics. An interesting question would be whether the same sort of "dynamic" would carry over to the non-mathematical. I have no clue if there is a lesson for inferentialist semantics generally lying in there.

A separate question is whether the above view is satisfactory as a foundational position. One reason to think that it isn't is that there seems to be some slack between an inferential system at any one time and the consequences of the axioms. The latter always outruns the former, which tries to play catch-up. One might want a foundational position to explain what is going on with the latter, or at least the slack between the two. Although, that might stack the deck in favor of classical logic against something like intuitionism or constructivist positions.

Thursday, February 07, 2008

Congratulations

A few weeks ago we decided on the papers to accept for the Pitt-CMU conference, but I just now got the list of authors. Congratulations are in order for the authors. The presentations this year will be:
Errol Lord's "On Maximal Rationality"
Lefteris Farmakis's "On Van Fraassen's 'New Epistemology' and its 'Solution' to Conceptual Relativism"
Preston Stovall's "The Normative Dimensions of Kant's Account of Representation"
Ole Hjortland's "Proof-Theoretic Harmony and Structural Assumptions"
Ben Almassi's "Relativism as Science Studies Methodology".
I'll be commenting on Ole's paper. If it goes well I'll post my comments online. Should be fun.

Wednesday, February 06, 2008

Famous fleas

I am reading Sellars's lectures "The Structure of Knowledge" for McDowell's class. At one point, there is an incredibly bizarre line:
"Why might not individuals have parts, and these again and so on ad infinitum, as do the famous fleas which have other fleas to bite 'em."
This just looked too weird, so I rushed to Google. Wikipedia has the answer. It turns out that this is actually a reference to two things. One is a poem by Jonathan Swift which parodies some old views in biology. The other is a poem by De Morgan, which parodies the Swift one. The latter is reproduced here from the Wikipedia article.
"Great fleas have little fleas upon their backs to bite 'em,
And little fleas have lesser fleas, and so ad infinitum.
And the great fleas themselves, in turn, have greater fleas to go on,
While these again have greater still, and greater still, and so on."
That Sellars must have been a very cultured guy.

Monday, February 04, 2008

Fusion and structure

An apparently important operation in algebraic logic is that of fusion, º. This is a binary operation which satisfies the following, where ≤ is a partial order representing implication:
aºb ≤ c iff b ≤ a→c, and
aºb ≤ c iff a ≤ c←b.
Fusion is then instrumental in connecting the operational forms of implication, → and ←, with the relational form, ≤. Fusion is glossed as a premiss-combining operation. There needs to be something to combine premises, since ≤ is a binary relation and often we want to talk about more than one premiss entailing others. From the definition, it appears that º can only be introduced on the left of ≤. There isn't any indication of what a º on the right of ≤ would amount to, or how it would relate to → and ← occurring on the left of ≤, which can of course happen, as in a→b ≤ a→b. Granted, the section in the book dealing with fusion is entirely in terms of implication, so it makes a modicum of sense to leave it at that. The question just jumps out: what is the relation between ∧ and º when º is on the right of ≤? Do we get identity, or does it depend on further principles?
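
One thing the residuation conditions do give us for free (a standard little derivation, not anything specific to Dunn): an internal form of modus ponens. Start with a→c ≤ a→c, which holds by the reflexivity of ≤, and apply the first condition from right to left with b = a→c:

    a→c ≤ a→c, so aº(a→c) ≤ c.

Fusing a premiss with an implication from it yields the conclusion.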

Dunn notes that in languages with just implication and fusion, adding conditions to fusion yields algebras corresponding to different implication logics (as I mentioned before). I haven't cleared up what the sense of correspondence here is, but I have some other things to say. If Dunn is right, then conditions on fusion have a tight connection (correspondence?) to structural rules in Gentzen systems and discharge functions in natural deduction systems. If fusion can't do anything apart from associate, not even commute, then we get the Lambek calculus. (I've never fiddled with a Gentzen version of the Lambek calculus, so I'm on uneasy footing there.) If we stipulate that fusion has the lower bound property (aºb≤a) and commutes, then we get intuitionistic implication. No indication is given in the book of what we need to add to get classical implication. This is odd, since there is a natural progression from the ueber-weak Lambek implication, to relevance implication, to intuitionistic, to classical. But it goes missing.
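
To make the connection with structural rules a bit more concrete, the rough dictionary, as I understand it (my own gloss, worth checking against a proper reference), is:

    commutation (aºb = bºa) — exchange
    lower bound (aºb ≤ a) — weakening
    square-increasingness (a ≤ aºa) — contraction
    associativity — the implicit rebracketing allowed by treating premises as a sequence or multiset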

One idea about why is that ≤, being a binary relation, only allows one element on its right. If we have a Gentzen formulation of intuitionistic logic and lift the restriction on the number of propositions in the succedent, then we get classical logic, a seemingly magical fact. I'm not sure how to develop this idea. The presentation of classical implication uses binary ≤, so that doesn't seem to be the sticking point. Another idea is that fusion can't be the classical way of combining premises. Classical logic is truth-functional, so extensional. Fusion is intensional in the sense that it might discriminate between equivalent elements that appear in different places in the fused element. In order to get fusion to act extensionally, it would need to behave like ∧. Could we have two distinct operations, ∧ and ∧', that are both extensional and behave like 'and'? No. Making º that much like ∧ would make it ∧.
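
Here is a sketch of why making º behave that much like ∧ just makes it ∧ (a standard argument, assuming º is monotone in both arguments):

    aºb ≤ a and aºb ≤ b gives aºb ≤ a∧b
    a∧b ≤ (a∧b)º(a∧b) ≤ aºb gives a∧b ≤ aºb, using x ≤ xºx, monotonicity, and a∧b ≤ a, a∧b ≤ b

The two inequalities squeeze º and ∧ together: aºb = a∧b.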

There seems to be a bit of a difference between the proof theoretic and algebraic ways of looking at things. In proof theory, more specifically in a Gentzen formulation, we need to have multiple succedents to have classical logic (or do away with the ⇒ entirely). Merely being free in premiss manipulation and combination, what the comma does (or taking the antecedent and succedent to be multisets), only gets us to intuitionistic logic. Strictly speaking in the algebraic formulation, we are only using one premiss, the fusion (or conjunction) of the elements. Perhaps a similar case could be made for the Gentzen formulation, that only a single premiss is being used, namely the multiset (or sequence) of the premises. I feel like there is still a difference in the amount of structure being attributed to the respective sides, proof theoretic and algebraic, though. The question, then, is what is being suppressed on the algebraic side?

I had hoped to come up with something more to say about the connection between conditions on fusion and structural rules, but I don't think I have anything worthwhile to say at the moment. I have a vague idea that the conditions on fusion might make the structural rules explicit. The idea behind this is that the conditions are single statements in the language. The structural rules are inference rules, rules being different than propositions. Might fusion make structure explicit? Maybe?

Saturday, February 02, 2008

Metalanguages

This post probably just indicates that I am missing some key bits of knowledge. Why are all metalanguages classical (at least the ones I've seen)? Are there no non-classical ones? Are there reasons for not using a non-classical metalanguage, or for there not being any, apart from making things more complicated? Surely there are discussions of these things somewhere...

On a different note, googling "non-classical metalanguage" yields 5 results. After this post it will probably yield 6.

Making a virtue of a weakness

It is fairly common knowledge that classical logic is stronger, in a sense, than intuitionistic logic. There are things that you can prove in classical logic that you can't prove in intuitionistic logic, in a sense. I say "in a sense" because there are those translations from the former to the latter (the double-negation translations) such that for everything you can prove in classical logic, something is provable in intuitionistic logic that the classical logician would consider equivalent but the intuitionistic one wouldn't. It is also fairly well known that intuitionistic logic distinguishes φ, ∼φ, and ∼∼φ, when φ isn't itself a negated sentence, and certainly for all atomic sentences. Classical logic only distinguishes φ and ∼φ.
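
For reference, the clauses of the Goedel-Gentzen negative translation N (from memory, so double-check the details) run roughly:

    P^N = ∼∼P, for atomic P
    (∼φ)^N = ∼φ^N
    (φ∧ψ)^N = φ^N ∧ ψ^N
    (φ∨ψ)^N = ∼(∼φ^N ∧ ∼ψ^N)
    (φ→ψ)^N = φ^N → ψ^N
    (∀x φ)^N = ∀x φ^N
    (∃x φ)^N = ∼∀x ∼φ^N

Classical logic proves φ just in case intuitionistic logic proves φ^N.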

Switching gears slightly, K and S4 are both weaker logics than S5. In S5, all strings of modal operators collapse down to the one on the far right, e.g. [][]◊p becomes ◊p. This means that in S5 there are only two modalities: [] and ◊ (ignoring details about definition for a second). In S4, there is no difference between strings of a single operator and the single operator, e.g. [][][][] and []. They are equivalent. If we look at, say, K, there are more modalities. In K there is a difference between [] and ◊[], for example. There is even a difference between [] and [][] for non-theorems. While there are only two modalities (possibly one if you define them right) that are primitive in any of the standard modal logics, this doesn't mean that there are only two modalities full stop. As indicated above, in S4 there are more than in S5 and even more in K.
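
A quick way to see the collapses (a toy script of mine implementing just the two reductions mentioned above; full S4 equivalence identifies more strings than this, e.g. []◊[]◊ with []◊):

```python
def reduce_s5(s):
    """In S5, a string of boxes and diamonds collapses to its
    rightmost operator."""
    return s[-1] if s else s

def reduce_s4(s):
    """Partial S4 reduction: adjacent duplicates collapse,
    since [][] is S4-equivalent to [] (and likewise for diamonds)."""
    out = []
    for op in s:
        if not out or out[-1] != op:
            out.append(op)
    return ''.join(out)

# 'b' stands for [] and 'd' for ◊, to keep the strings readable.
assert reduce_s5('bbd') == 'd'    # [][]◊p is S5-equivalent to ◊p
assert reduce_s4('bbbb') == 'b'   # [][][][]p is S4-equivalent to []p
assert reduce_s4('bdb') == 'bdb'  # []◊[] does not collapse this way in S4
```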

There are a couple of upshots of this. One is that in a weaker logic we have more propositions, or contents, considered nonequivalent, or distinct by the logic's lights. This lets us see the fine structure of various operators and logical constants. Whereas classical logic wants to run roughshod over those details, intuitionistic (or relevance or minimal or...) logic forces us to pay attention to them because their derivational utility is lessened.

Another upshot, pointed out recently to me by a few people, among them my roommate and my model theory teacher, is that if fewer propositions are considered equivalent, then a space opens up between the propositions. Let's take the intuitionistic case, since the details are the most studied and I'm more familiar with it. A space opens up between p and ∼∼p, for atomic p. This can be exploited by adding principles to fill in the gap, so to speak. The really interesting thing is that these principles need not be consistent with the classical equivalence of the propositions. To put it a different way, the principles can extend the system, or logic, in a way that is classically inconsistent, but perfectly consistent in the weaker logic. This has, apparently, been carried out a fair amount in constructive analysis, which is, roughly, real analysis done in intuitionistic logic with extra principles for dealing with infinite sequences. (This last may not be right. I can't find a good description of choice sequences online.) An example of something provable in constructive analysis that is inconsistent with classical analysis is Brouwer's theorem, that every real-valued function on a closed interval [a,b] is uniformly continuous. (The example is supplied by Feferman, who has a paper on the relation between classical and constructive analysis on his web page.)
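
The gap between p and ∼∼p shows up already in a tiny Kripke model (a standard textbook example): take two points w ≤ v with p true only at v. Then:

    w ⊮ p, since p fails at w;
    w ⊮ ∼p, since v ≥ w forces p;
    w ⊩ ∼∼p, since no point ≥ w forces ∼p.

So w forces ∼∼p without forcing p, and the two come apart.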

The point here is that weakening the logic doesn't just mean that some things aren't provable. Rather, the interesting thing is that new things become provable, things that are inconsistent with the stronger logic. This need not be restricted to just math and logic, although it might be easiest to see in those areas. If weakening the logic creates this sort of conceptual space, it seems like it opens the door for divergences in other areas of philosophy, e.g. semantics and the philosophy of language. This should open up some avenues for investigation.

Friday, February 01, 2008

A trifling observation

Carnap quotes the Tractatus a few times in his Logical Syntax. He says he was heavily influenced by it. In discussing statements of the laws of science he says:
"See Wittgenstein on this point: 'All propositions such as the law of causation, the law of continuity in nature,... are a priori intuitions of the possible forms of the propositions of science.' (Instead of 'a priori intuitions of' we would prefer to say: 'conventions concerning'.)"
"Conventions concerning" certainly isn't a paraphrase of "a priori intuitions", as which it seems Carnap wants to present it. I'm not sure if he read that into TLP or if he just decided to insert his own view there.

Sellars and Carnap

In Brandom's Locke Lectures, he quotes an odd line from Sellars: "the language of modality is a transposed language of norms." Part of the point of the lectures is to give some clearer content to that claim. Brandom does it with his analysis of pragmatic metavocabularies and such. I had thought that this idea of a transposed language was original to Sellars. He is a fairly colorful writer, which sometimes obscures his points.

In reading Carnap's Logical Syntax, I came across something similar. At the start of section 80, "The Dangers of the Material Mode of Speech," Carnap says that the material mode is "a special kind of transposed mode of speech." This is glossed as "one in which, in order to assert something about an object a, something corresponding is asserted about an object b which stands in a certain relation to object a." He goes on to say that metaphor is a transposed mode of speech. Looking back at the Sellars quote, this seems to be the sort of thing he meant. I don't know if an idea of transposition was common in the first half of the 20th century, but I've not come across it elsewhere.

This led me to a question. To what extent were Sellars's ideas a reaction to or an offshoot of Carnap's? It is clear that Sellars read a lot of Carnap. His article on the role of rules of language (an article whose title I'm blanking on) [Edit: "Inference and Meaning", thanks Rick] directly deals with Carnap. Sellars made a contribution to Carnap's Schilpp volume. According to the SEP article on Sellars, he was deeply influenced by Carnap; the article focuses mainly on their engagement with science and epistemology. It doesn't seem to mention views on language, although the article on rules [Edit: "Inference and Meaning"] definitely lays out stuff on material inference and inferentialism, as it is retroactively called.

Why would this matter? An apparently hot topic is the influence of Carnap on Quine. Carnap also had a big influence on Sellars, whose views on a great many things are different from Quine's. Looking at what they were reacting to in Carnap and what their ultimate reactions were could probably indicate further territory to explore, especially since Sellars was fairly heavily influenced by Wittgenstein and Kant in ways that Quine was not. To strain a metaphor to the breaking point, it would be like triangulating the shadows of giants. While on their shoulders. In any case, it is starting to seem like a non-terrible idea to look into this. There is some material on the influence of Carnap on Quine, and possibly some on Carnap and Sellars. I don't know if there is much on Sellars and Quine, since they didn't seem to engage each other much. Although, books on roughly inferentialist ideas, like Brandom's Making It Explicit and Peregrin's Meaning and Structure, discuss them together, though not Carnap, to my memory. It seems like it could be fruitful to bring all three into the picture together.