Monday, June 16, 2008

Thoughts on Inventing Temperature

I just read Inventing Temperature by Hasok Chang. It is, as the title suggests, a book on the history of temperature, focusing on the development of thermometry. Each chapter is divided into two parts, historical narrative and philosophical analysis, though there are elements of each in both. I am going to comment on a few themes from the book.

One is the epistemic problem of setting up a scale on which to measure temperature. A scale requires fixed points to calibrate against, but knowing that a certain phenomenon always happens at a certain temperature requires knowing what temperature it happens at, and that in turn requires already having a calibrated thermometer, since exact temperature is not directly observable and, outside a fairly narrow range, isn't observable with even rough accuracy. The response to this circularity that Chang finds in the historical narrative is a process of iterative improvement. First, some substances are found whose behavior roughly agrees, and these can be calibrated against our senses. On that basis, more precise devices can be constructed, still relying only on ordinal comparisons. If things go well, further devices can then be constructed on the basis of those, now with a numeric scale that has a physical meaning. Chang is hesitant to follow Peirce in taking this iterative development to be linked to truth, although he notes the similarity.

What I want to note about this is an affinity with some of Mark Wilson's views, in particular his suggestion to view agents themselves as measuring devices. Agents don't have nice numeric scales associated with the various physical properties they respond to, but their measuring capabilities are enough to cope with the world. Chang's suggestion fits with this and ties it in a clear way to the development of science: our rough measuring capacities are sufficient to start the iterative development of measuring devices as various scientific enterprises come to need them. (This sounds somewhat commonsensical.) It is an aspect that seems to go somewhat missing in discussions of perception.

In Wandering Significance, Wilson says that he thinks Gupta and Belnap's revision theory of truth, and their theory of circular definitions, can be used to explain various episodes in the history of science. He doesn't provide examples, which is a pity, since my historical ignorance left me wondering what he had in mind and how that story would go. Chang's picture of iterative development provides a clear example. The concept of temperature in use, its operative core, is clearly circular. Starting from some initial hypotheses about values based on our perceptual capacities, an extension is roughly determined, which then forms the basis for further revisions. Repeat.

There are some rough edges here though. The revision process is clearly not taken to the transfinite, or even very far into the finite. The range of starting hypotheses and values is fairly constrained, so the extension, if any, that is stable under all initial hypotheses will not be determined. Despite these incongruities, the theory of circular definitions looks to be applicable. If this is a paradigm case, then it would vindicate Wilson's claim, since I would expect similar episodes of setting up a system of measuring devices, measurement operations, and theoretical concepts to arise often enough in the recent history of science. Even if this particular sort of system involving measuring devices does not, Chang indicates that systems of circular concepts do arise and are developed through something like the iterative process he sketches. If this is right, then it seems one should pay more attention to circular concepts and conceptual development than has been paid lately.
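To make the revision picture a bit more concrete, here is a toy sketch in Python of finite revision sequences for circular definitions over a four-element domain. Everything in it, the domain, the two definitions, and the cutoff of ten revision steps, is invented for illustration; Gupta and Belnap's actual theory works with arbitrary, including transfinite, sequences and a more careful notion of stability.

```python
from itertools import chain, combinations

DOMAIN = frozenset(range(4))  # a tiny, invented domain of four objects

def revise(defn, hypothesis):
    """One revision step: apply the circular definition to a candidate
    extension (a 'hypothesis') to get the next candidate extension."""
    return frozenset(x for x in DOMAIN if defn(x, hypothesis))

def revision_sequence(defn, hypothesis, steps=10):
    """Finitely many revision steps from a starting hypothesis."""
    seq = [frozenset(hypothesis)]
    for _ in range(steps):
        seq.append(revise(defn, seq[-1]))
    return seq

def all_hypotheses():
    """Every subset of the tiny domain, used as a starting hypothesis."""
    items = sorted(DOMAIN)
    return (frozenset(c) for c in
            chain.from_iterable(combinations(items, r)
                                for r in range(len(items) + 1)))

def finitely_stable(defn, steps=10, tail=4):
    """Crude finite surrogate for stability: an element counts as stably in
    (out) if it is in (out of) each of the last `tail` stages of the
    sequence from every starting hypothesis."""
    stably_in, stably_out = set(DOMAIN), set(DOMAIN)
    for h in all_hypotheses():
        for stage in revision_sequence(defn, h, steps)[-tail:]:
            stably_in &= stage
            stably_out -= stage
    return stably_in, stably_out

def grounded(x, h):
    """A well-behaved circular definition: x is F iff x == 3 or x + 1 is F."""
    return x == 3 or (x + 1) in h

def liar(x, h):
    """A liar-like definition: x is G iff x is not G. It never settles."""
    return x not in h

for name, defn in [("grounded", grounded), ("liar-like", liar)]:
    stably_in, stably_out = finitely_stable(defn)
    print(name, "stably in:", sorted(stably_in), "stably out:", sorted(stably_out))
```

The well-behaved definition settles on the same extension from every starting hypothesis, which is the analogue of the calibration process converging; the liar-like one never settles, so nothing comes out stably in or stably out.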

4 comments:

nogre said...

I got myself into some of the same issues while working on relativity. To measure a location, distance, or velocity, you need a fixed point of reference to measure from. Of course, your fixed point may not actually be fixed, ruining your measurement. Say you're attempting to measure the location of some star. If you measure its location relative to another star, you will probably be fine for almost any purpose, but if you measure it from the light on a weather satellite, you'll have problems.

So you can only measure location and velocity insofar as you can find a fixed reference point, and you can only determine that a reference point is fixed if you know its velocity.

What actually happens is that we use agreed-upon fixed points to define location, e.g. the Cathedral of Learning or the ridiculously nice botanical gardens you have there in Pittsburgh, as references to find other things. These objects are stationary relative to their immediate surroundings (and the people witnessing/observing them), though they are not stable relative to, say, the moon.

Now, to define a unit of distance, we could break up the distance between the Cathedral of Learning and the botanical gardens' entrance into some standard unit. This is how the meter was originally defined, as I am sure you know: one ten-millionth of a quarter of Earth's circumference. The meter has apparently undergone many circular iterations of the sort you describe, eventually resulting in the modern definition as the distance light travels in some small fraction of a second.
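For what it's worth, the arithmetic behind both definitions is easy to check. The sketch below uses the standard figures, the exact defined speed of light and the idealized 10,000 km quarter meridian, rather than anything measured:

```python
# Rough arithmetic behind the two definitions of the metre mentioned above.
# The speed-of-light figure is the exact modern defined value; the
# Earth-based figures are the idealized ones behind the original definition.

# Original (1790s) definition: one ten-millionth of the quarter meridian,
# the distance from the equator to the North Pole.
quarter_meridian_m = 10_000_000                  # metres, by that definition
earth_circumference_km = 4 * quarter_meridian_m / 1000
print(earth_circumference_km)                    # 40000.0, the familiar ~40,000 km

# Modern definition: the distance light travels in 1/299,792,458 of a second.
c = 299_792_458                                  # speed of light in m/s, exact by definition
print(1 / c * 1e9)                               # ~3.336 ns for light to cover one metre
```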

I'm not too worried about the number of iterations (transfinite or otherwise) that you worry about. We try to base measurement systems on fundamental constants as much as possible, and more iterations mean measuring more accurately in terms of those constants.

This is an issue for mass: we still use a hunk of metal as the standard kilogram, since there is no easy way to use fundamental constants to measure mass. And it is known that the ingot varies in mass by tiny amounts, but these amounts are far smaller than would cause problems for just about anyone.

I figure a major change will occur when we have to incorporate the uncertainty principle into our standards, but we aren't there yet: the meter is defined in terms of seconds and light, and seconds are defined in terms of the number of vibrations of a cesium atom. If, for instance, we need something that vibrates even more quickly than the cesium atom for more accurate results, then we'll need to use something subatomic. Then there might be uncertainty issues.

Shawn said...

nogre,
That is quite interesting. A brief reply to a small point near the end of your comment: The thing about iterations was just a technical point of divergence between the iterative process Chang highlights and the revision theory of Gupta and Belnap, from what I understood of it. I'll try to respond to the rest of the comment soon.

Shawn said...

nogre,
I'm glad to hear that problems with fixed points in measurement occur in other areas. I don't know much about the general area of measurement, but it sounds like versions of the problem crop up all over the place. Chang's book has gotten me thinking about the interaction between operationalism about concepts and the fixed point problems. There could be something interesting in there. As you point out, some of the variation doesn't matter, as in mass, since said variation is too small to affect, in Wilson's phrase, the practical go of the enterprise. It seems like the variation could possibly lead to problems on some views of reference, although this isn't an idea that I've thought out at all.

Bryan said...

Thanks for pointing out the interesting connection between Chang and Gupta, Shawn.

I think it would be interesting and tricky to pursue it by trying to make Chang's "epistemic iteration" precise. One essential question would seem to be, what standard of 'stability' would be chosen for the revision sequences?

As you correctly point out, we are normally only dealing with a handful of 'stages' in Chang's epistemic iteration. (Chang gives 4 in the case of thermometers.) And even if there were more, it isn't clear at what point we should be willing to call such iterations 'stable' in a revision-theory sense, and have it still mean something in the history of science.

This question strikes me as difficult, and tied up in our lack of a robust theory of inductive support. On the other hand, maybe a revision-theory will add something new to the problem of induction.