Sunday, May 27, 2007

No little red jackets

This is going to be a short post with more than one idea, which is to say a little scattered. It is also a little rough around the edges, although fun for me. One of my favorite webcomics had a recent strip whose last panel really stuck out. The text reads, "Built just like regular sentences, these sentences can in an instant transform forever the life of their speaker!" with a caption at the bottom saying "National Sentence Council". The context is one of the characters thinking about proposing to another character, but that is neither here nor there. The important point is the claim that one cannot tell just from the form of a sentence what its consequences will be. There are a few qualifications here, since one can arguably read the logical consequences of a sentence off its form. There is also a sense in which using any sentence will change one's life forever, e.g. you've committed yourself to something, or generated some implicature, or simply used another sentence. This sense seems largely uninteresting and will be ignored. If one knew what "marry" meant, then one could tell that any sentence containing that word would likely be important. However, this requires semantic (meaning) knowledge, not just knowledge about the shape or syntax of the sentence. Adopting a phrase from one of my seminar teachers from the fall: the important sentences don't come wearing red jackets to indicate that they are important. The mundane sentences and the important ones are built the same way, and which is which doesn't seem to be determined solely by their form.

On a somewhat related note, one of the things that Nick Asher, rightfully, hammered me on at the Austin conference was that the syntax of a sentence is not a sufficient condition for determining how the sentence is used. Something that looks like a declarative sentence can easily be used as a question, or even a command, and similarly for the others. Things that look like questions can combine with things that seem like they only take traditionally declarative sentences, e.g. "if we get more serious, should I tell him my name?" I didn't get a chance to ask him about it, but it reminded me of an argument Davidson made that there is no conventional determination of the force of a sentence, although convention can contribute to the meaning. (I think that was how the idea went... It is from "Moods and Performances" in Inquiries into Truth and Interpretation.) Although this is somewhat different from the point in the first paragraph, they seem to go together. If Asher and Davidson are right (although I'm willing to admit they are making different but similar points), then the assertional sentences don't come wearing little red assertion jackets.

What is the point of this? There are two. The first, which I'm interested in here, is the question of to what degree syntax (in a broad sense) can be said to determine the meaning and pragmatic effect of a sentence in use. The examples above seem to indicate that there isn't a lot of determination. The second is to register my support for the use of webcomics to illustrate philosophical ideas or otherwise liven up blogs.

Summer plans

This is somewhat late compared to my last end-of-term post. The spring term for Pitt is over. Long over, in fact, except for the paper on the Tractatus that I am slowly finishing; I hope to have that done by mid-June. Summer has started. I'm going to be in Pittsburgh for most of the summer taking an intensive German class. I am also doing a reading group with a few other grad students. It is a very Pittsburgh reading group: we are reading both Brandom's Articulating Reasons and McDowell's Mind and World, starting with the former. We're doing chapter three this week. I hope to have a few posts based on the readings, starting with something on harmony once I get my copy of the book back. Next weekend is the formal epistemology workshop at CMU. I will be attending at least a few of the sessions. Apart from that, I am hoping to read some more logic stuff (topics: inference, algebraic logic, computability, relevance logic), maybe some philosophy of language stuff (topics: no clue...), and write up a few posts I've had on the backburner about Prawitz, Marconi, and some more things I learned at the UT Austin conference (so educational!). That's the plan. I think it might be a bit much.

Thursday, May 24, 2007

Hypergaming again

A while ago I wrote a post on hypergame, a paradox that was pointed out to me by a visiting prospective student. He was nice enough to post the reference to an article by the inventor of the paradox: William S. Zwicker, "Playing Games with Games: The Hypergame Paradox", The American Mathematical Monthly, Vol. 94, No. 6 (Jun.-Jul., 1987), pp. 507-514.
My intuition was that the problem came down to the halting problem, but I wasn't sure how to get a correspondence between games and Turing machines. It turns out that Zwicker does basically that. He suggests that for any game, you associate a program that allows the players to input their moves and plays out exactly as the game does. This isn't much more formal than the idea in my earlier post, but it does lead quickly to the source of the paradox: it assumes the existence of a Turing machine that can solve the halting problem. He also gave another way of looking at it. Identify each game with its game tree of possible moves. Finite games are then trees that are well-founded. The question becomes: is the tree whose immediate subtrees are all the well-founded trees itself well-founded? This, apparently, connects up with existing research in set theory. There is also a little discussion of the nature of mathematical paradox in the article, including a quote from an article Quine wrote on paradoxes for Scientific American. I had never seen that article before, so that was kind of cool. Zwicker's article is a fun read as well as insightful.
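For concreteness, here is a sketch of the tree version of the argument as I understand it (the notation is mine, not Zwicker's):

```latex
Let $W = \{\, T : T \text{ is a well-founded tree} \,\}$, and let $H$ be the
tree whose immediate subtrees are exactly the members of $W$.

\emph{If $H$ is well-founded:} then $H \in W$, so $H$ is an immediate subtree
of itself, and $H, H, H, \dots$ is an infinite descending path, so $H$ is not
well-founded after all.

\emph{If $H$ is not well-founded:} then some infinite descending path leaves
$H$, and its first step lands in some $T \in W$; the rest of the path is an
infinite descending path through the well-founded tree $T$. Contradiction
either way.
```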

Wednesday, May 23, 2007

Topology and verification

Recently one of the older grad students, Kohei Kishida, explained his thesis project to me. He is writing on topological semantics for modal logic. I finally got a grasp on why topologies are useful for representing verification. This came up because Kevin Kelly discusses it in his work connecting computability and the problem of induction. Neat stuff.

The definition of a topology T on a set X is: T is a collection of subsets of X such that (1) the empty set and X are in T, (2) T is closed under arbitrary unions, and (3) T is closed under finite intersections. The pair of X and T is called a topological space. It is really the third condition that makes topologies useful for representing verification. For example, suppose you have verifications (finite proofs or observations) of some formula for each natural number. Do you then have a proof of the universally quantified statement? Not if you plan on stringing all the proofs together, taking their conjunction. This would result in an infinitely long proof, but proofs are finite. To see the point in a different way, imagine propositions (in an appropriate topological space) as sets of worlds. Taking the conjunction of propositions is taking their intersection. Taking the conjunction of infinitely many propositions doesn't guarantee that the conjunction will be in the topology, since topologies are closed only under finite intersections, not infinite ones.
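To make the definition concrete, here is a minimal sketch in Python (names and encoding mine) that checks the three axioms for a candidate topology on a finite universe. (For the failure of infinite intersections, the standard example is the real line: each interval (-1/n, 1/n) is open, but the intersection over all n is {0}, which is not.)

```python
from itertools import combinations

def is_topology(X, T):
    """Check the three topology axioms for a candidate collection T on X.

    Everything is finite here, so closure under arbitrary unions and
    finite intersections reduces to closure under pairwise union and
    intersection.
    """
    T = {frozenset(s) for s in T}
    X = frozenset(X)
    if frozenset() not in T or X not in T:        # axiom (1)
        return False
    return all(a | b in T and a & b in T          # axioms (2) and (3)
               for a, b in combinations(T, 2))

# The chain {} < {1} < {1,2} < {1,2,3} is a topology on {1,2,3}.
print(is_topology({1, 2, 3}, [set(), {1}, {1, 2}, {1, 2, 3}]))   # True
# {1} and {2} are opens here, but their union {1,2} is missing:
print(is_topology({1, 2, 3}, [set(), {1}, {2}, {1, 2, 3}]))      # False
```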

What Kohei pointed out was the following. Tarski and McKinsey proved an equivalence (I'm not sure what the appropriate description is exactly... interpretation? embedding?) between S4 modal logic and topologies with an interior operation int (interior algebras: Boolean algebras equipped with an interior operator). E.g. int(A) \subseteq A corresponds to []\phi -> \phi; int(A) \subseteq int(int(A)) to []\phi -> [][]\phi; X \subseteq int(X) to TRUE -> []TRUE; int(A) \cap int(B) \subseteq int(A \cap B) to []\phi & []\psi -> [](\phi & \psi); and so on for the rule of necessitation. But there is an embedding into S4 of intuitionistic logic, which is fairly widely agreed to be good for representing finite proofs/observations. So one can give a topological semantics for S4 which models verification. One interesting thing is the relation between the topological semantics for S4 and the Kripke tree semantics for intuitionistic logic. Since we can translate intuitionistic logic into S4, we can just look at the structure of the semantics. It turns out that Kripke trees for intuitionistic logic, translated to S4, correspond to a certain kind of topological space, an Alexandrov space, in which arbitrary intersections of open sets are again open. This means that topological semantics is more general than Kripke semantics. What is the import of this? I've unfortunately forgotten most of what Kohei told me, but he has done a lot of work with Steve Awodey generalizing this stuff to first-order logic. The exciting thing for me was the conceptual connection between verification and topology, which Kohei's work made really clear.
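For concreteness, here is a small sketch of the embedding mentioned above, the Gödel-McKinsey-Tarski translation from intuitionistic propositional logic into S4, with formulas encoded as nested tuples (the encoding is mine):

```python
# Goedel-McKinsey-Tarski translation: intuitionistic logic proves phi
# iff S4 proves gmt(phi).  A formula is a nested tuple, e.g.
# ('imp', ('atom', 'p'), ('atom', 'q')) stands for p -> q.
def gmt(phi):
    op = phi[0]
    if op == 'atom':
        return ('box', phi)                                # T(p) = []p
    if op in ('and', 'or'):
        return (op, gmt(phi[1]), gmt(phi[2]))              # T commutes with & and v
    if op == 'imp':
        return ('box', ('imp', gmt(phi[1]), gmt(phi[2])))  # T(a -> b) = [](Ta -> Tb)
    if op == 'not':
        return ('box', ('not', gmt(phi[1])))               # T(~a) = []~Ta
    raise ValueError('unknown connective: %r' % (op,))

print(gmt(('imp', ('atom', 'p'), ('atom', 'p'))))
# ('box', ('imp', ('box', ('atom', 'p')), ('box', ('atom', 'p'))))
```

The boxes are what make intuitionistic truth behave like verified truth: an atom counts as true only where it has been established, and it stays established from there on.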

Another post in which something is linked...

The lovely Stanford Encyclopedia of Philosophy has a new article up that is relevant to philosophers of language and semanticists. It is the discourse representation theory entry by David Beaver and Bart Geurts. It is a nice overview of what is good about DRT.

I promise to get back to posts with more content later this week.

Sunday, May 20, 2007

Philosophy Talk

Philosophy Talk got mentioned in the San Francisco Weekly. In fact, it got a Best of SF 2007 award for best local radio program. I'm glad to see the show is still going strong; the latest episode, on artificial intelligence, features guest Marvin Minsky.

Saturday, May 19, 2007

Internet humor + logic = ????

One answer to the titular question is xkcd, which is linked to often enough by fellow travellers. But, just in case people read my blog but not Richard Zach's LogBlog, I would like to direct people to another answer to the titular question, this delightful post.

Thursday, May 17, 2007

The internets work in mysterious ways

This post is entirely unphilosophical. Sometimes I am amazed at who does not have an internet presence. Some people just do not exist in any contentful way on the internet.

Recently I got interested in the work of Georg Kreisel. I googled him to get some background and an idea of what to read. Not that much shows up. The best thing is this biography of him. He seems not to have written any books. There aren't any compilations of his work. There's just this book, Kreiseliana, which is about him. There are no bibliographies of his work online, and for some reason Wikipedia doesn't even have a page for him. The internets have failed me! Apart from his general "unwinding" program, the neatest biographical thing that turns up for him is that Wittgenstein, whom he had as an undergrad teacher, said he was "the most able philosopher he had ever met who was also a mathematician."

Monotonicity, for lack of a better title

A few weeks ago I was out at Stanford. While I was there, Johan van Benthem gave a talk about natural logic to some computational linguists. His talk was called "A Brief History of Natural Logic". The talk was good, as always from van Benthem, but the computational linguists seemed a little unimpressed. I think they were hoping that natural logic and generalized quantifiers would be able to help with some issues in textual entailment. It doesn't really look like it will. As a completely ad hominem aside, I find textual entailment to be really boring as an area of inquiry.

One of the interesting little asides that van Benthem gave was about monotonicity. Semantic monotonicity was defined as follows: if some formula \phi(P) is true, and in the model the set assigned to P is a subset of that assigned to Q, then \phi(Q) is true as well, e.g. from "all cats are mammals", infer "all cats are vertebrates". There are a few quantifiers that exhibit very regular behavior and satisfy, in addition to monotonicity in their arguments, two additional properties: conservativity (taken from the handout, for quantifiers Q and predicates A, B: Q AB iff Q A(A\cap B)) and variety (if A isn't empty and Q AB for some B, then there is some C such that \lnot Q AC). These quantifiers are exactly: all, some, no, not all. Neat historical fact: they make up the square of opposition. The neat fact from the talk, which is something I want to move from back to front burner some day, is that almost all (maybe all, I forget exactly what van Benthem said) logics that are studied exhibit monotonicity for quantifiers. This property is independent of the order of the logic being used, applying as much to first-order logic as to higher-order logics. But, since it cuts across these different orders and systems, there isn't a nice way to express it generally so that it holds across the different systems. Additionally, the monotonicity inferences were known to the medieval logicians (and apparently to ancient Chinese logicians of the Mohist school, according to Liu and Zhang 2007). They were interested in these sorts of inference issues, but the issues were obscured in the move to the more standard view of logic, divided into orders according to quantifiers. (As another aside, Fred Sommers's work on syllogistic logic was recommended during the talk as a defense of the more traditional approaches to logic that were more concerned with this sort of thing. One point for N.N.) The take-away points of the talk seemed to be these. There was another approach to studying logic that got obscured in modern approaches, in large part because of the way languages are created and studied now. That other approach catches some important generalizations that get lost on the modern view. And finally, monotonicity inferences are really important. This last point prompted me to be more sensitive to them when I am reading through Articulating Reasons this summer, since I'm curious to what extent they exemplify Brandom's material inferences.
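To make these properties concrete, here is a minimal sketch in Python (encodings and names mine) that brute-force checks conservativity and right upward monotonicity for the four square-of-opposition quantifiers, treated as relations between subsets of a small finite universe:

```python
from itertools import combinations

def subsets(u):
    u = list(u)
    return [frozenset(c) for r in range(len(u) + 1) for c in combinations(u, r)]

# The four quantifiers of the square of opposition, as relations
# between subsets of a finite universe.
quantifiers = {
    'all':     lambda a, b: a <= b,
    'some':    lambda a, b: bool(a & b),
    'no':      lambda a, b: not (a & b),
    'not all': lambda a, b: not (a <= b),
}

def conservative(q, u):
    """Conservativity: Q AB iff Q A(A & B), for all A, B over u."""
    return all(q(a, b) == q(a, a & b)
               for a in subsets(u) for b in subsets(u))

def upward_monotone_right(q, u):
    """Right upward monotonicity: Q AB and B a subset of B' imply Q AB'."""
    return all(q(a, b2)
               for a in subsets(u) for b in subsets(u) for b2 in subsets(u)
               if q(a, b) and b <= b2)

u = {1, 2, 3}
print({name: conservative(q, u) for name, q in quantifiers.items()})
# all four are conservative
print({name: upward_monotone_right(q, u) for name, q in quantifiers.items()})
# 'all' and 'some' pass; 'no' and 'not all' are downward monotone
# on the right instead, so the upward check fails for them
```

On a three-element universe the exhaustive check is instant, and the same scaffolding makes it easy to test variety or left monotonicity as well.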

Saturday, May 12, 2007

A quick link

There is a neat little post over at The Space of Reasons (a name that just warms my heart) on a problem with literally interpreting the Bible. You ask: Aren't there a lot of problems with literal interpretation? Yes, but this post does a nice, neat job of arguing that there is a big problem with it. There's also good epistemology stuff over there, if that is your thing.

Friday, May 11, 2007

A speculative post on Wittgenstein

This one is a somewhat speculative post, as indicated by the title, which doubles as a warning. One of the ideas in the Tractatus is that propositions that presuppose their own truth (or falsity) are nonsense. For example, the proposition that x is an object is an illicit proposition, because either a non-object term goes in for x, in which case it is nonsense, or an object term goes in for x, in which case it is true. Elementary propositions must be capable of truth and falsity, but this particular proposition isn't capable of both, only one. (The speculative part begins now.) I wonder if this is a point on which Wittgenstein's thought remained constant throughout his philosophical life. Late Wittgenstein is associated with the idea that a rule or concept can be normative for one only if it is possible to violate the norm of that rule or concept. This is one of the ideas in the private language considerations. Representation is a partially normative concept. Something can be represented well or poorly, accurately or inaccurately. Propositions in the Tractatus are supposed to represent the way the world is, through the picturing relation. The elementary propositions must be capable of both representing and misrepresenting the world, that is, truly or falsely representing the world. But certain apparent propositions, ones that use formal concepts, e.g. x is an object, cannot misrepresent the world. Instead of misrepresenting, they don't represent at all, due in part to grammatical misfire. I wonder if Wittgenstein is employing, perhaps only implicitly, his idea that normative concepts must admit the possibility of violation. Some apparently elementary propositions are such that they cannot misrepresent reality, so they cannot represent it either.

I'm not sure how this idea would fit into either a reading of the Tractatus or Wittgenstein's philosophy as a whole. This is in part because I'm still not at home either in the Tractatus or in Wittgenstein's philosophy as a whole. But, if there is something to the idea that there are many points of continuity between early and late Wittgenstein, as defended by the so-called New Wittgensteinians, then there might be some further support for this idea in Wittgenstein's other writings.

The moving parts of modal logic

Here's another post on something that Karen Bennett said, which will hopefully be more substantive than the last one. In one of the discussions, Bennett said that Lewis's semantics for modal logic runs into trouble when talking about the "moving parts of his system". This means, roughly, when it starts talking about matters metalinguistic, like the domain of worlds, truth, domains of individuals, etc. Her particular example was: necessarily, there are many worlds. Apparently Lewis's semantics messes this up. I don't know the details, since I don't know what sort of language Lewis was using or the particular semantics. That's not really the point. It got me wondering how prevalent this sort of thing is. When do semantics go astray for things that are naturally situated in the metalanguage? The truth predicate can cause some problems. Does the reference predicate? I'm not sure I've read anything about logics that include an object language reference predicate.

The idea dovetailed nicely with some stuff I was working on for Nuel Belnap's modal logic class. Aldo Bressan's quantified modal logic includes a construction of terms that represent worlds. This construction works at least as long as the number of worlds is countable; I'm not sure what happens when there are uncountably many worlds, since that requires me to get much clearer on a few other things. This allows the construction of nominal-like things and quantification over them. The language lets you say that, for any arbitrary number of worlds, necessarily there are at least that many worlds. The neat point is that the semantics works out correctly. [The following might need to be changed because I'm worried that I glossed over something important in the handwaving. I think the basic point is still correct.] For example (finessing and handwaving some details for the post), the sentence "necessarily there are at least 39 worlds" will be true iff for all worlds, the sentence "there are at least 39 worlds" is true, which will be the case just in case there are at least 39 worlds. This is because you can prove that there is a one-one correspondence between worlds and the terms that represent worlds. Bressan uses S5, and the way "worlds" is defined uses a diamond, so the specific world at which we are evaluating does not really matter. So, the semantics gets it right. Of course, this doesn't say that there are only models with at least 39 worlds. If that is what we want to say, we can't. It does say that within a model, it's true at all worlds that there are at least 39 worlds, which is all the object language necessity says. This gives us a way to talk about a bit of the metalanguage in the object language, albeit in a slightly restricted way.
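Schematically (the notation here is mine, not Bressan's), the evaluation runs:

```latex
w \Vdash \Box\,(\text{there are at least } 39 \text{ worlds})
  \iff \text{for all } v \in W,\; v \Vdash (\text{there are at least } 39 \text{ worlds})
  \iff |W| \geq 39,
```

where the first step uses the universal accessibility of S5, and the second uses the one-one correspondence between worlds and the terms that represent them.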

Friday, May 04, 2007

Occammed

During Karen Bennett's talk at the UT Austin conference, she said something that I think is worth repeating, again and again. It isn't even that big a point. In discussing presentism (or actualism, I forget which she was discussing at the time) she said something to defuse/dodge an objection from Occam's razor. I think it was in the context of using objects as proxies for the tensed objects/possibilia that one loses if one is a presentist/actualist. She said, roughly, that Occam's razor mandates that one avoid multiplying entities beyond necessity, not that one must make do with less. Occam's razor is applicable once you've determined what things are necessary and what things you're attempting to smuggle in. Before those are settled, the possibility of doing without one or another entity, e.g. by using some sort of proxy, shouldn't force one to abandon the stuff that has proxies. Small point, but it seems worth keeping in mind. Some of the discussions I've had about Grice's arguments against exclusive "or" in natural language seem to come down to the other person saying: look, we can explain away exclusive "or" using the inclusive one, so by Occam's razor we don't need the exclusive one. But this assumes that we've already determined what meanings are necessary and can begin shaving off the excess.

Thursday, May 03, 2007

Post-conference post

As I mentioned previously, the UT Austin conference was delightful. All the talks were heavily attended by both grads and profs. I got some good comments on my paper from Achille Varzi and Nick Asher. Varzi was fantastic. I'm going to have to look into his work in the philosophy of logic and language. I got to meet Sam and Aidan. One thing that surprised me was the concentration of metaphysics papers. I did not realize they liked metaphysics so much down in Austin. As Sam said, if you are interested in going to a conference next year, Austin is a great one to submit to. The conference gave me a few ideas that I'm going to try to write up in posts. A combination of tiredness, seeing friends, and working on my outstanding Wittgenstein paper has kept me from writing. [The rest of this paragraph was added after the rest of the post.] The only things I would have changed about the conference were both on my end. I was getting over a bad cold, which made the socializing somewhat difficult and did no good for my presentation. The other thing would have been to get my presentation within the time constraints. A combination of the end of the term and procrastination led me to read my paper (bad idea) when it was already a little long (another bad idea). This culminated in cutting chunks and rushing through other bits. Alas. Next time I hope to be a bit better with the presentation.

Complete aside: I read Peter Galison's Einstein's Clocks, Poincaré's Maps last week. As a narrative in the history of science it was good. There were several points in the book where Galison abandoned philosophical and historical reserve to write something over the top. Ignoring those, it did a fairly convincing job of giving some historical context to the problem of simultaneity, how the technological background naturally supported the idea of operationalizing it, and how this fed into Einstein's theory of relativity and Poincaré's writings on time. I had read about the problem of standardizing time in Meiji and post-Meiji Japan, but I had never read about it in a European/American setting. It turns out that colonial and shipping concerns put much more pressure on standardizing time for Europeans and Americans than in Japan, where, from what I remember, it was pushed along mainly by the rail industry and its commercial interests.