
A meditation on physical units: Part 2

[Preface: This is the second part of my discussion of this paper by Craig Holt. It has a few more equations than usual, so strap a seat-belt onto your brain and get ready!]

“Alright brain. You don’t like me, and I don’t like you, but let’s get through this thing and then I can continue killing you with beer.”    — Homer Simpson

Imagine a whale. We like to say that the whale is big. What does that mean? Well, if we measure the length of the whale, say by comparing it to a meter-stick, we will count up a very large number of meters. However, this only tells us that the whale is big in comparison to a meter-stick. It doesn’t seem to tell us anything about the intrinsic, absolute length of the whale. But what is the meaning of `intrinsic, absolute’ length?

Imagine the whale is floating in space in an empty universe. There are no planets, people, fish or meter-sticks to compare the whale to. Maybe we could say that the whale has the property of length, even though we have no way of actually measuring its length. That’s what `absolute’ length means. We can imagine that it has some actual number, independently of any standard for comparison like a meter-stick.

"Not again!"
“Oh no, not again!”

In Craig Holt’s paper, this distinction — between measured and absolute properties — is very important. All absolute quantities carry primes (apostrophes), so the absolute length of a whale is written whale-length’ and the absolute length of a meter-stick is written meter’. The length of the whale that we measure, in meters, can be written as the ratio whale-length’ / meter’. This ratio is something we can directly measure, so it doesn’t need a prime; we can just call it whale-length: it is the number of meter-sticks that equal a whale-length. It is clear that if we were to change all of the absolute lengths in the universe by the same factor, then the absolute properties whale-length’ and meter’ would both change, but the measurable property whale-length would not change.

Ok, so, you’re probably thinking that it is weird to talk about absolute quantities if we can’t directly measure them — but who says that you can’t directly measure absolute quantities? I only gave you one example where, as it turned out, we couldn’t measure the absolute length. But one example is not a general proof. When you go around saying things like “absolute quantities are meaningless and therefore changes in absolute quantities can’t be detected”, you are making a pretty big assumption. This assumption has a name, it is called Bridgman’s Principle (see the last blog post).

Bridgman’s Principle is the reason why at school they teach you to balance the units on both sides of an equation. For example, `speed’ is measured in units of length per time (no, not milligrams — this isn’t Breaking Bad). If we imagine that light has some intrinsic absolute speed c’, then to measure it we would need to have (for example) some reference length L’ and some reference time duration T’ and then see how many lengths of L’ the light travels in time T’. We would write this equation as:

$$ c' \;=\; C\,\frac{L'}{T'} \qquad\qquad (1) $$

where C is the speed that we actually measure. Bridgman’s Principle says that a measured quantity like C cannot tell us the absolute speed of light c’, it only tells us what the value of c’ is compared to the values of our measuring apparatus, L’ and T’ (for example, in meters per second). If there were some way that we could directly measure the absolute value of c’ without comparing it to a measuring rod and a clock, then we could just write c’ = C without needing to specify the units of C. So, without Bridgman’s Principle, all of Dimensional Analysis basically becomes pointless.
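Just to spell out the unit bookkeeping that the teachers have in mind (my notation, writing [x] for the dimensions of x):

$$ [c'] = \frac{[\text{length}]}{[\text{time}]}, \qquad \left[\frac{c'\,T'}{L'}\right] = \frac{[\text{length}]}{[\text{time}]}\cdot\frac{[\text{time}]}{[\text{length}]} = 1, $$

so the measured number C is dimensionless: it only records how c' compares to the rod-and-clock combination L'/T', which is the whole point of the Principle.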

So why should Bridgman’s Principle be true in general? Scientists are usually lazy and just assume it is true because it works in so many cases (this is called “proof by induction”). After all, it is hard to find a way of measuring the absolute length of something, without referring to some other reference object like a meter-stick. But being a good scientist is all about being really tight-assed, so we want to know if Bridgman’s Principle can be proven to be watertight.

A neat example of a watertight principle is the Second Law of Thermodynamics. This Law was also originally an inductive principle (it seemed to be true in pretty much all thermodynamic experiments) but then Boltzmann came along with his famous H-Theorem and proved that it has to be true if matter is made up of atomic particles. This is called a constructive justification of the principle [1].

The H Theorem makes it nice and easy to judge whether some crackpot’s idea for a perpetual motion machine will actually run forever. You can just ask them: “Is your machine made out of atoms?” And if the answer is `yes’ (which it probably is), then you can point out that the H-Theorem proves that machines made up of atoms must obey the Second Law, end of story.

Coming up with a constructive proof, like the H-Theorem, is pretty hard. In the case of Bridgman’s Principle, there are just too many different things to account for. Objects can have numerous properties, like mass, charge, density, and so on; also there are many ways to measure each property. It is hard to imagine how we could cover all of these different cases with just a single theorem about atoms. Without the H-Theorem, we would have to look over the design of every perpetual motion machine, to find out where the design is flawed. We could call this method “proof by elimination of counterexamples”. This is exactly the procedure that Craig uses to lend support to Bridgman’s Principle in his paper.

To get a flavor for how he does it, recall our measurement of the speed of light from equation (1). Notice that the measured speed C does not have to be the same as the absolute speed c’. In fact we can rewrite the equation as:

$$ \frac{c'}{\,L'/T'\,} \;=\; \frac{c'\,T'}{L'} \;=\; C \qquad\qquad (2) $$

and this makes it clear that the number C that we measure is not itself an absolute quantity, but rather is a comparison between the absolute speed of light c’ and the absolute distance L’ per time T’. What would happen if we changed all of the absolute lengths in the universe? Would this change the value of the measured speed of light C? At first glance, you might think that it would, as long as the other absolute quantities on the left hand side of equation (2) are independent of length. But if that were true, then we would be able to measure changes in absolute length by observing changes in the measurable speed of light C, and this would contradict Bridgman’s Principle!

To get around this, Craig points out that the length L’ and time T’ are not fundamental properties of things, but are actually reducible to the atomic properties of physical rods and clocks that we use to make measurements. Therefore, we should express L’ and T’ in terms of the more fundamental properties of matter, such as the masses of elementary particles and the coupling constants of forces inside the rods and clocks. In particular, he argues that the absolute length of any physical rod is equal to some number times the “Bohr radius” of a typical atom inside the rod. This radius is in turn proportional to:

$$ L' \;\propto\; \frac{h'}{m'_e\,c'} \qquad\qquad (3) $$

where h’ and c’ are the absolute values of Planck’s constant and the speed of light, respectively, and m’_e is the absolute electron mass. Similarly, the time duration measured by an atomic clock is proportional to:

$$ T' \;\propto\; \frac{h'}{m'_e\,c'^2} \qquad\qquad (4) $$

As a result, both the absolute length L’ and time T’ actually depend on the absolute constants c’, h’ and the electron mass m’_e. Substituting these into the expression for the measured speed of light, we get:

$$ \frac{c'\,T'}{L'} \;=\; \frac{c'\,\times\, X\,h'/(m'_e\,c'^2)}{Y\,h'/(m'_e\,c')} \;=\; \frac{X}{Y} \;=\; C \qquad\qquad (5) $$

where X and Y are proportionality constants (X for the clock and Y for the rod, as written in equation (5)). So the factors of c’, h’ and m’_e all cancel, and we are left with C = X/Y. The numbers X and Y depend on how we construct our rods and clocks — for instance, they depend on how many atoms are inside the rod, and what kind of atom we use inside our atomic clock. In fact, the definitions of the `meter’ and the `second’ are specially chosen so as to make this ratio exactly C = 299,792,458 [2].
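If you like seeing the cancellation happen with actual numbers, here is a quick sanity check in Python (my own toy calculation, not from Craig’s paper; the factors X and Y are arbitrary made-up constants):

```python
# Toy check: the measured speed C = c'T'/L' does not depend on the absolute
# constants, because the rod and clock are built out of those same constants.
def measured_speed_of_light(c_abs, h_abs, me_abs, X=2.0, Y=3.0):
    L_rod = Y * h_abs / (me_abs * c_abs)       # rod length, as in eq. (3), with factor Y
    T_clock = X * h_abs / (me_abs * c_abs**2)  # clock period, as in eq. (4), with factor X
    return c_abs * T_clock / L_rod             # the measured number C, eq. (5)

print(measured_speed_of_light(1.0, 1.0, 1.0))     # 0.666... = X/Y
print(measured_speed_of_light(7.0, 0.01, 1.0e6))  # 0.666... again, whatever the absolutes are
```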

Now that we have included the fact that our measuring rods and clocks are made out of matter, we see that in fact the left hand side of equation (5) is independent of any absolute quantities. Therefore changing the absolute length, time, mass, speed etc. cannot have any effect on the measured speed of light C, and Bridgman’s principle is safe — at least in this example.

(Some readers might wonder why making a clock heavier should also make it run faster, as seems to be suggested by equation (4). It is important to remember that the usual kinds of clocks we use, like wristwatches, are quite complicated things containing trillions of atoms. To calculate how the behaviour of all these atoms would change the ticking of the overall clock mechanism would be, to put it lightly, a giant pain in the ass. That’s why Craig only considers very simple devices like atomic clocks, whose behaviour is well understood at the atomic level [3].)

image credit: xetobyte – A Break in Reality

Another simple model of a clock is the light clock: a beam of light bouncing between two mirrors separated by a fixed distance L’. Since light has no mass, you might think that the frequency of such a clock should not change if we were to increase all absolute masses in the universe. But we saw in equation (4) that the frequency of an atomic clock is proportional to the electron mass, and so it would increase. It then seems like we could measure this increase in atomic clock frequency by comparing it to a light clock, whose frequency does not change — and then we would know that the absolute masses had changed. Is this another threat to Bridgman’s Principle?

The catch is that, as Craig points out, the length L’ between the mirrors of the light clock is determined by a measuring rod, and the rod’s length is inversely proportional to the electron mass, as we saw in equation (3). So if we magically increase all the absolute masses, we would also cause the absolute length L’ to get smaller, which means the light-clock frequency would increase. In fact, it would increase by exactly the same amount as the atomic clock frequency, so comparing them would not show us any difference! Bridgman’s Principle is saved again.
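To spell out the bookkeeping (my notation, using the proportionalities (3) and (4)), the two clock frequencies scale identically:

$$ f'_{\mathrm{atomic}} \;\propto\; \frac{1}{T'} \;\propto\; \frac{m'_e\,c'^2}{h'}, \qquad f'_{\mathrm{light}} \;\propto\; \frac{c'}{L'} \;\propto\; c'\cdot\frac{m'_e\,c'}{h'} \;=\; \frac{m'_e\,c'^2}{h'}, $$

so scaling up the electron mass multiplies both frequencies by the same factor, and their ratio (the only thing we can actually measure) stays put.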

Let’s do one more example, this time a little bit more extreme. According to Einstein’s theory of general relativity, every lump of mass has a Schwarzschild radius, which is the radius of a sphere such that if you crammed all of the mass into this sphere, it would turn into a black hole. Given some absolute amount of mass M’, its Schwarzschild radius is given by the equation:

$$ R' \;=\; \frac{2\,G'\,M'}{c'^2} \qquad\qquad (6) $$

where c’ is the absolute speed of light from before, and G’ is the absolute gravitational constant, which determines how strong the gravitational force is. Now, glancing at the equation, you might think that if we keep increasing all of the absolute masses in the universe, planets will start turning into black holes. For instance, the radius of Earth is about 6370 km. This is the Schwarzschild radius for a mass of roughly a billion times Earth’s mass. So if we magically increased all absolute masses by a factor of a billion, shouldn’t Earth collapse into a black hole? Then, moments before we all die horribly, we would at least know that the absolute mass has changed, and Bridgman’s Principle was wrong.
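To check the numbers, here is the arithmetic with standard values of the constants (a back-of-the-envelope Python snippet, not from Craig’s paper):

```python
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.972e24   # kg
R_earth = 6.371e6    # m

R_s = 2 * G * M_earth / c**2   # Earth's Schwarzschild radius, eq. (6)
print(R_s)                     # ~0.009 m, i.e. about 9 millimetres
print(R_earth / R_s)           # ~7e8: Earth would need to be roughly a billion times
                               # heavier (at the same size) to fit inside its own horizon
```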

Of course, that is only true if changing the absolute mass doesn’t affect the other absolute quantities in equation (6). But as we now know, increasing the absolute mass will cause our measuring rods to shrink, and our clocks to run faster. So the question is, if we scale the masses by some factor X, do all the X‘s cancel out in equation (6)?

Well, since our absolute lengths have to shrink, the Schwarzschild radius should shrink, so if we multiply M’ by X, then we should divide the radius R’ by X. This doesn’t balance! Hold on though — we haven’t dealt with the constants c’ and G’ yet. What happens to them? In the case of c’, we have c’ = C L’ / T’. Since L’ and T’ both decrease by a factor of X (lengths and time intervals get shorter) there is no overall effect on the absolute speed of light c’.

How do we measure the quantity G’? Well, G’ tells us how much two masses (measured relative to a reference mass m’) will accelerate towards each other due to their gravitational attraction. Newton’s law of gravitation says:

$$ a' \;=\; N\,\frac{G'\,m'}{L'^2} \qquad\qquad (7) $$

where N is some number that we can measure, and it depends on how big the two masses are compared to the reference mass m’, how large the distance between them is compared to the reference length L’, and so forth. If we measure the acceleration a’ using the same reference length and time L’,T’, then we can write:

$$ a' \;=\; A\,\frac{L'}{T'^2} \qquad\qquad (8) $$

where A is just the measured acceleration in these units. Putting this all together, we can rearrange equation (7) to get:

$$ G' \;=\; \frac{A}{N}\,\frac{L'^3}{m'\,T'^2} \qquad\qquad (9) $$

and we can define G = (A/N) as the actually measured gravitational constant in the chosen units. From equation (9), we see that increasing all the absolute masses (including the reference mass m’) by a factor of X, and hence dividing each instance of L’ and T’ by X, implies that the absolute constant G’ will actually change: it will be divided by a factor of X².
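Writing the substitution out explicitly (the same content as equation (9), just with the scaling applied):

$$ G' \;=\; \frac{A}{N}\,\frac{L'^3}{m'\,T'^2} \;\;\longrightarrow\;\; \frac{A}{N}\,\frac{(L'/X)^3}{(X\,m')\,(T'/X)^2} \;=\; \frac{1}{X^2}\cdot\frac{A}{N}\,\frac{L'^3}{m'\,T'^2} \;=\; \frac{G'}{X^2}. $$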

What is the physics behind all this math? It goes something like this: suppose we are measuring the attraction between two masses separated by some distance. If we increase the masses, then our measuring rods shrink and our clocks get faster. This means that when we measure the accelerations, the objects seem to accelerate faster than before. This is what we expect, because two masses should become more attractive (at the same distance) when they become more massive. However, the absolute distance between the masses also has to shrink. The net effect is that, after increasing all the absolute masses, we find that the masses are producing the exact same attractive force as before, only at a closer distance. This means the absolute attraction at the original distance is weaker — so G’ has become weaker after the absolute masses in the universe have been increased (notice, however, that the actually measured value G does not change).

Diagram of a Cavendish experiment for measuring gravity.

Returning now to equation (6), and multiplying M’ by X, dividing R’ by X and dividing G’ by X², we find that all the extra factors cancel out. We conclude that increasing all the absolute masses in the universe by a factor of a billion will not, in fact, cause Earth to turn into a black hole, because the effect is balanced out by the contingent changes in the absolute lengths and times of our measuring instruments. Whew!
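Here is the same cancellation as a two-line numerical check (toy units; only the scaling matters):

```python
def schwarzschild_radius_in_rod_units(G, M, c, L_rod):
    # Eq. (6) divided by the length of our measuring rod: this is what we actually observe.
    return (2 * G * M / c**2) / L_rod

X = 1e9  # scale all absolute masses up by a billion
before = schwarzschild_radius_in_rod_units(G=1.0, M=1.0, c=1.0, L_rod=1.0)
after  = schwarzschild_radius_in_rod_units(G=1.0 / X**2, M=X, c=1.0, L_rod=1.0 / X)
print(before, after)   # identical: nothing observable has changed
```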

Craig’s paper is long and very thorough. He compares a whole zoo of physical clocks, including electric clocks, light-clocks, freely falling inertial clocks, different kinds of atomic clocks and even gravitational clocks made from two orbiting planets. Not only does he generalize his claim to Newtonian mechanics, he covers general relativity as well, and the Dirac equation of quantum theory, including a discussion of Compton scattering (a photon reflecting off an electron). Besides all of this, he takes pains to discuss the meaning of coupling constants, the Planck scale, and the related but distinct concept of scale invariance. All in all, Craig’s paper just might be the most comprehensive justification for Bridgman’s principle so far in existence!

Most scientists might shrug and say “who needs it?”. In the same way, not many scientists care to examine perpetual motion machines to find out where the flaw lies. In this respect, Craig is a craftsman of the first order — he cares deeply about the details. Unlike the Second Law of Thermodynamics, Bridgman’s Principle seems rarely to have been challenged. This only makes Craig’s defense of it all the more important. After all, it is especially those beliefs which we are disinclined to question that are most deserving of a critical examination.


Footnotes:

[1] Some physical principles, like the Relativity Principle, have never been given a constructive justification. For this reason, Einstein himself seems to have regarded the Relativity Principle with some suspicion. See this great discussion by Brown and Pooley.

[2] Why not just set it to C=1? Well, no reason why not! Then we would replace the meter by the `light second’, and the second by the `light-meter’. And we would say things like “Today I walked 0.3 millionths of a light second to buy an ice-cream, and it took me just 130 billion light-meters to eat it!” So, you know, that would be a bit weird. But theorists do it all the time.

[3] To be perfectly strict, we cannot assume that a wristwatch will behave in the same way as an atomic clock in response to changes in absolute properties; we would have to derive their behavior constructively from their atomic description. This is exactly why a general constructive proof of Bridgman’s Principle would be so hard, and why Craig is forced to stick with simple models of clocks and rulers.


A meditation on physical units: Part 1

[Preface: A while back, Michael Raymer, a professor at the University of Oregon, drew my attention to a curious paper by Craig Holt, who tragically passed away in 2014 [1]. Michael wrote:
“Dear Jacques … I would be very interested in knowing your opinion of this paper, since Craig was not a professional academic, and had little community in which to promote the ideas. He was one of the most brilliant PhD students in my graduate classes back in the 1970s, turned down an opportunity to interview for a position with John Wheeler, worked in industry until age 50 when he retired in order to spend the rest of his time in self study. In his paper he takes a Machian view, emphasizing the relational nature of all physical quantities even in classical physics. I can’t vouch for the technical correctness of all of his results, but I am sure they are inspiring.”

The paper makes for an interesting read because Holt, unencumbered by contemporary fashions, freely questions some standard assumptions about the meaning of `mass’ in physics. Probably because it was a work in progress, Craig’s paper is missing some of the niceties of a more polished academic work, like good referencing and a thoroughly researched introduction that places the work in context (the most notable omission is the lack of background material on dimensional analysis, which I will talk about in this post). Despite its rough edges, Craig’s paper led me down quite an interesting rabbit-hole, of which I hope to give you a glimpse. This post covers some background concepts; I’ll mention Craig’s contribution in a follow-up post. ]

______________
Imagine you have just woken up after a very bad hangover. You retain your basic faculties, such as the ability to reason and speak, but you have forgotten everything about the world in which you live. Not just your name and address, but your whole life history, family and friends, and entire education are lost to the epic blackout. Using pure thought, you are nevertheless able to deduce some facts about the world, such as the fact that you were probably drinking Tequila last night.

The first thing you notice about the world around you is that it can be separated into objects distinct from yourself. These objects all possess properties: they have colour, weight, smell, texture. For instance, the leftover pizza is off-yellow, smells like sardines and sticks to your face (you run to the bathroom).

While bending over the toilet for an extended period of time, you notice that some properties can be easily measured, while others are more intangible. The toilet seems to be less white than the sink, and the sink less white than the curtains. But how much less? You cannot seem to put a number on it. On the other hand, you know from the ticking of the clock on the wall that you have spent 37 seconds thinking about it, which is exactly 14 seconds more than the time you spent thinking about calling a doctor.

You can measure exactly how much you weigh on the bathroom scale. You can also see how dishevelled you look in the mirror. Unlike your weight, you have no idea how to quantify the amount of your dishevelled-ness. You can say for sure that you are less dishevelled than Johnny Depp after sleeping under a bridge, but beyond that, you can’t really put a number on it. Properties like time, weight and blood-alcohol content can be quantified, while other properties like squishiness, smelliness and dishevelled-ness are not easily converted into numbers.

You have rediscovered one of the first basic truths about the world: all that we know comes from our experience, and the objects of our experience can only be compared to other objects of experience. Some of those comparisons can be numerical, allowing us to say how much more or less of something one object has than another. These cases are the beginning of scientific inquiry: if you can put a number on it, then you can do science with it.

Rulers, stopwatches, compasses, bathroom scales — these are used as reference objects for measuring the `muchness’ of certain properties, namely, length, duration, angle, and weight. Looking in your wallet, you discover that you have exactly 5 dollars of cash and a receipt from a taxi for 30 dollars, and that as of last night you are exactly 24 years old.

You reflect on the meaning of time. A year means the time it takes the Earth to go around the Sun, or approximately 365 and a quarter days. A day is the time it takes for the Earth to spin once on its axis. You remember your school teacher saying that all units of time are defined in terms of seconds, and one second is defined as 9192631770 oscillations of the light emitted by a Caesium atom. Why exactly 9192631770, you wonder? What if we just said 2 oscillations? A quick calculation shows that this would make you about 110 billion years old according to your new measure of time. Or what about switching to dog years, which are 7 per human year? That would make you 168 dog years old. You wouldn’t feel any different — you would just be having a lot more birthday parties. Given the events of last night, that seems like a bad idea.
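For anyone who wants to check my arithmetic, here is the throwaway calculation:

```python
age_years = 24

# Redefine the "second" as 2 caesium oscillations instead of 9,192,631,770:
shrink_factor = 9_192_631_770 / 2
print(age_years * shrink_factor / 1e9)   # ~110, i.e. about 110 billion of the new "years"

# Dog years, at 7 per human year:
print(age_years * 7)                     # 168
```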

You are twice as old as your cousin, and that is true in dog years, cat years, or clown years [2]. Similarly, you could measure your height in inches, centimeters, or stacked shot-glasses — but even though you might be 800 rice-crackers tall, you still won’t be able to reach the aspirin in the top shelf of the cupboard. Similarly, counting all your money in cents instead of dollars will make it a bigger number, but won’t actually make you richer. These are all examples of passive transformations of units, where you imagine measuring something using one set of units instead of another. Passive transformations change nothing in reality: they are all in your head. Changing the labels on objects clearly cannot change the physical relationships between them.

Things get interesting when we consider active transformations. If a passive transformation is like saying the length of your coffee table is 100 times larger when measured in cm than when measured in meters, then an active transformation would be if someone actually replaced your coffee table with a table 100 times bigger. Now, obviously you would notice the difference because the table wouldn’t fit in your apartment anymore. But imagine that someone, in addition to replacing the coffee table, also replaced your entire apartment and everything in it with scaled-up models 100 times the size. And imagine that you also grew into a giant 100 times your original size while you were sleeping. Then when you woke up, as a giant inside a giant apartment with a giant coffee table, would you realise anything had changed? And if you made yourself a giant cup of coffee, would it make your giant hangover go away?

Or if you woke up as a giant bug?

We now come to one of the deepest principles of physics, called Bridgman’s Principle of absolute significance of relative magnitude, named for our old friend Percy Bridgman. The Principle says that only relative quantities can enter into the laws of physics. This means that, whatever experiments I do and whatever measurements I perform, I can only obtain information about the relative sizes of quantities: the length of the coffee table relative to my ruler, or the mass of the table relative to the mass of my body, etc. According to this principle, actively changing the absolute values of some quantity by the same proportion for all objects should not affect the outcomes of any experiments we could perform.

To get a feeling for what the principle means, imagine you are a primitive scientist. You notice that fruit hanging from trees tends to bob up and down in the wind, but the heavier fruits seem to bounce more slowly than the lighter fruits (for those readers who are physics students, I’m talking about a mass on a spring here). You decide to discover the law that relates the frequency of bobbing motion to the mass of the fruit. You fill a sack with some pebbles (carefully chosen to all have the same weight) and hang it from a tree branch. You can measure the mass of the sack by counting the number of pebbles in it, but you still need a way to measure the frequency of the bobbing. Nearby you hear the sound of water dripping from a leaf into a pond. You decide to measure the frequency by how many times the sack bobs up and down in between drips of water. Now you are ready to do your experiment.

You measure the bobbing frequency of the sack for many different masses, and record the results by drawing in the dirt with a stick. After analysing your data, you discover that the frequency f (in oscillations per water drop) is related to the mass m (in pebbles) by a simple formula:

$$ f \;=\; \frac{k}{\sqrt{m}} \qquad\qquad (1) $$
where k stands for a particular number, say 16.8. But what does this number really mean?

Unbeknownst to you, a clever monkey was watching you from the bushes while you did the experiment. After you retire to your cave to sleep, the monkey comes out to play a trick on you. He carefully replaces each one of your pebbles with a heavier pebble of the same size and appearance, and makes sure that all of the heavier pebbles are the same weight as each other. He takes away the original pebbles and hides them. The next day, you repeat the experiment in exactly the same way, but now you discover that the constant k has changed from yesterday’s value of 16.8 to the new value of 11.2. Does this mean that the law of nature that governs the bobbing of things hanging from the tree has changed overnight? Or should you decide that the law is the same, but that the units that you used to measure frequency and mass have changed?
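As an aside, here is a toy simulation of the monkey’s trick (my own model, assuming the bobbing obeys the usual mass-on-a-spring rule, frequency = √(stiffness/mass)/2π, with arbitrary units):

```python
import math

def measured_k(branch_stiffness, pebble_mass, n_pebbles=25):
    # Hang n pebbles from the branch, record the bobbing frequency f,
    # and report the constant k = f * sqrt(n) appearing in f = k / sqrt(m).
    total_mass = n_pebbles * pebble_mass
    f = math.sqrt(branch_stiffness / total_mass) / (2 * math.pi)
    return f * math.sqrt(n_pebbles)

k_day1 = measured_k(branch_stiffness=1.0, pebble_mass=1.0)
k_day2 = measured_k(branch_stiffness=1.0, pebble_mass=2.25)  # the monkey's heavier pebbles
print(k_day1, k_day2, k_day1 / k_day2)  # ratio is 1.5, the same drop as 16.8 -> 11.2
```

Notice that the measured k doesn’t care how many pebbles you hang, only what kind of pebble you count with, which is exactly why the swap looks like a change of units rather than a change of law.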

You decide to apply Bridgman’s Principle. The principle says that if (say) all the masses in the experiment were changed by the same proportion, then the laws of physics would not allow us to see any difference, provided we used the same measuring units. Since you do see a difference, Bridgman’s Principle says that it must be the units (and not the law itself) that have changed. `These must be different pebbles’ you say to yourself, and you mark them by scratching an X onto them. You go out looking for some other pebbles and eventually you find a new set of pebbles which give you the right value of 16.8 when you perform the experiment. `These must be the same kind of pebbles that I used in the original experiment’ you say to yourself, and you scratch an O on them so that you won’t lose them again. Ha! You have outsmarted the monkey.


Notice that as long as you use the right value for k — which depends on whether you measure the mass using X or O pebbles — then the abstract equation (1) remains true. In physics language, you are interpreting k as a dimensional constant, having the dimensions of  frequency times √mass. This means that if you use different units for measuring frequency or mass, the numerical value of k has to change in order to preserve the law. Notice also that the dimensions of k are chosen so that equation (1) has the same dimensions on each side of the equals sign. This is called a dimensionally homogeneous equation. Bridgman’s Principle can be rephrased as saying that all physical laws must be described by dimensionally homogeneous equations.

Bridgman’s Principle is useful because it allows us to start with a law expressed in particular units, in this case `oscillations per water-drop’ and `O-pebbles’, and then infer that the law holds for any units. Even though the numerical value of k changes when we change units, it remains the same in any fixed choice of units, so it represents a physical constant of nature.

The alternative is to insist that our units are the same as before (the pebbles look identical after all). That means that the change in k implies a change in the law itself, for instance, it implies that the same mass hanging from the tree today will bob up and down more slowly than it did yesterday. In our example, it turns out that Bridgman’s Principle leads us to the correct conclusion: that some tricky monkey must have switched our pebbles. But can the principle ever fail? What if physical laws really do change?

Suppose that after returning to your cave, the tricky monkey decides to have another go at fooling you. He climbs up the tree and whispers into its leaves: `Do you know why that primitive scientist is always hanging things from your branch? She is testing how strong you are! Make your branches as stiff and strong as you can tomorrow, and she will reward you with water from the pond’.

The next day, you perform the experiment a third time — being sure to use your `O-pebbles’ this time — and you discover again that the value of k seems to have changed. It now takes many more pebbles to achieve a given frequency than it did on the first day. Using Bridgman’s Principle, you again decide that something must be wrong with your measuring units. Maybe this time it is the dripping water that is wrong and needs to be adjusted, or maybe you have confidence in the regularity of the water drip and conclude that the `O-pebbles’ have somehow become too light. Perhaps, you conjecture, they were replaced by the tricky monkey again? So you throw them out and go searching for some heavier pebbles. You find some that give you the right value of k=16.8, and conclude that these are the real `O-pebbles’.

The difference is that this time, you were tricked! In fact the pebbles you threw out were the real `O-pebbles’. The change in k came from the background conditions of the experiment, namely the stiffness in the tree branches, which you did not consider as a physical variable. Hence, in a sense, the law that relates bobbing frequency to mass (for this tree) has indeed changed [3].

You thought that the change in the constant k was caused by using the wrong measuring units, but in fact it was due to a change in the physical constant k itself. This is an example of a scenario where a physical constant turns out not to be constant after all. If we simply assume Bridgman’s Principle to be true without carefully checking whether it is justified, then it is harder to discover situations in which the physical constants themselves are changing. So, Bridgman’s Principle can be thought of as the assumption that the values of physical constants (expressed in some fixed units) don’t change over time. If we are sure that the laws of physics are constant, then we can use the Principle to detect changes or inaccuracies in our measuring devices that define the physical units — i.e. we can leverage the laws of physics to improve the accuracy of our measuring devices.

We can’t always trust our measuring units, but the monkey also showed us that we can’t always trust the laws of physics. After all, scientific progress depends on occasionally throwing out old laws and replacing them with more accurate ones. In our example, a new law that includes the tree-branch stiffness as a variable would be the obvious next step.

One of the more artistic aspects of the scientific method is knowing when to trust your measuring devices, and when to trust the laws of physics [4]. Progress is made by `bootstrapping’ from one to the other: first we trust our units and use them to discover a physical law, and then we trust in the physical law and use it to define better units, and so on. It sounds like a circular process, but actually it represents the gradual refinement of knowledge, through increasingly smaller adjustments from different angles. Imagine trying to balance a scale by placing handfuls of sand on each side. At first you just dump about a handful on each side and see which is heavier. Then you add a smaller amount to the lighter side until it becomes heavier. Then you add an even smaller amount to the other side until it becomes heavier, and so on, until the scale is almost perfectly balanced. In a similar way, switching back and forth between physical laws and measurement units actually results in both the laws and measuring instruments becoming more accurate over time.

______________

[1] It is a shame that Craig’s work remains incomplete, because I think physicists could benefit from a re-examination of the principles of dimensional analysis. Simplified dimensional arguments are sometimes invoked in the literature on quantum gravity without due consideration for their meaning.

[2] Clowns have several birthdays a week, but they aren’t allowed to get drunk at them, which kind of defeats the purpose if you ask me.

[3] If you are uncomfortable with treating the branch stiffness as part of the physical law, imagine instead that the strength of gravity actually becomes weaker overnight.

[4] This is related to a deep result in the philosophy of science called the Duhem-Quine Thesis.
Quoth Duhem: `If the predicted phenomenon is not produced, not only is the questioned proposition put into doubt, but also the whole theoretical scaffolding used by the physicist’.

Bootstrapping to quantum gravity


“If … there were no solid bodies in nature there would be no geometry.”
-Poincaré

A while ago, I discussed the mystery of why matter should be the source of gravity. To date, this remains simply an empirical fact. The deep insight of general relativity – that gravity is the geometry of space and time – only provides us with a modern twist: why should matter dictate the geometry of space-time?

There is a possible answer, but it requires us to understand space-time in a different way: as an abstraction that is derived from the properties of matter itself. Under this interpretation, it is perfectly natural that matter should affect space-time geometry, because space-time is not simply a stage against which matter dances, but is fundamentally dependent on matter for its existence. I will elaborate on this idea and explain how it leads to a new avenue of approach to quantum gravity.

First consider what we mean when we talk about space and time. We can judge how far away a train is by listening to the tracks, or gauge how deep a well is by dropping a stone in and waiting to hear the echo. We can tell a mountain is far away just by looking at it, and that the cat is nearby by tripping over it. In all these examples, an interaction is necessary between myself and the object, sometimes through an intermediary (the light reflected off the mountain into my eyes) and sometimes not (tripping over the cat). Things can also be far away in time. I obviously cannot interact with people who lived in the past (unless I have a time machine), or people who have yet to be born, even if they stood (or will stand) exactly where I am standing now. I cannot easily talk to my father when he was my age, but I can almost do it, just by talking to him now and asking him to remember his past self. When we say that something is far away in either space or time, what we really mean is that it is hard to interact with, and this difficulty of interaction has certain universal qualities that we give the names `distance’ and `time’.
It is worth mentioning here, as an aside, that in a certain sense, the properties of `time’ can be reduced to properties of `distance’ alone. Consider, for instance, that most of our interactions can be reduced to measurements of distances of things from us, at a given time. To know the time, I invariably look at the distance the minute hand has traversed along its cycle on the face of my watch. Our clocks are just systems with `internal’ distances, and it is the varying correspondence of these `clock distances’ with the distances of other things that we call the `time’. Indeed, Julian Barbour has developed this idea into a whole research program in which dynamics is fundamentally spatial, called Shape Dynamics.

Sigmund Freud Museum, Wien – Peter Kogler

So, if distance and time are just ways of describing certain properties of matter, what is the thing we call space-time?

We now arrive at a crucial point that has been stressed by philosopher Harvey Brown: the rigid rods and clocks with which we claim to measure space-time do not really measure it, in the traditional sense of the word `measure’. A measurement implies an interaction, and to measure space-time would be to grant space-time the same status as a physical body that can be interacted with. (To be sure, this is exactly how many people do wish to interpret space-time; see for instance space-time substantivalism and ontological structural realism).

Brown writes:
“One of Bell’s professed aims in his 1976 paper on `How to teach relativity’ was to fend off `premature philosophizing about space and time’. He hoped to achieve this by demonstrating with an appropriate model that a moving rod contracts, and a moving clock dilates, because of how it is made up and not because of the nature of its spatio-temporal environment. Bell was surely right. Indeed, if it is the structure of the background spacetime that accounts for the phenomenon, by what mechanism is the rod or clock informed as to what this structure is? How does this material object get to know which type of space-time — Galilean or Minkowskian, say — it is immersed in?” [1]

I claim that rods and clocks do not measure space-time, they embody space-time. Space-time is an idealized description of how material rods and clocks interact with other matter. This distinction is important because it has implications for quantum gravity. If we adopt the more popular view that space-time is an independently existing ontological construct, it stands to reason that, like other classical fields, we should attempt to directly quantise the space-time field. This is the approach adopted in Loop Quantum Gravity and extolled by Rovelli:

“Physical reality is now described as a complex interacting ensemble of entities (fields), the location of which is only meaningful with respect to one another. The relation among dynamical entities of being contiguous … is the foundation of the space-time structure. Among these various entities, there is one, the gravitational field, which interacts with every other one and thus determines the relative motion of the individual components of every object we want to use as rod or clock. Because of that, it admits a metrical interpretation.” [2]

One of the advantages of this point of view is that it dissolves some seemingly paradoxical features of general relativity, such as the fact that geometry can exist without (non-gravitational) matter, or the fact that geometry can carry energy and momentum. Since gravity is a field in its own right, it doesn’t depend on the other fields for its existence, nor is there any problem with it being able to carry energy. On the other hand, this point of view tempts us into framing quantum gravity as the mathematical problem of quantising the gravitational field. This, I think, is misguided.

I propose instead to return to a more Machian viewpoint, according to which space-time is contingent on (and not independent of) the existence of matter. Now the description of quantum space-time should follow, in principle, from an appropriate description of quantum matter, i.e. of quantum rods and clocks. From this perspective, the challenge of quantum gravity is to rebuild space-time from the ground up — to carry out Einstein’s revolution a second time over, but using quantum material as the building blocks.

Ernst Mach vs. Max Ernst. Get it right, folks.

My view about space-time can be seen as a kind of `pulling oneself up by one’s bootstraps’, or a Wittgenstein’s ladder (in which one climbs to the top of a ladder and then throws the ladder away). It works like this:
Step 1: Define the properties of space-time according to the behaviour of rods and clocks.
Step 2: Look for universal patterns or symmetries among these rods and clocks.
Step 3: Take the ideal form of this symmetry and promote it to an independently existing object called `space-time’.
Step 4: Having liberated space-time from the material objects from which it was conceived, use it as the independent standard against which to compare rods and clocks.

Seen in this light, the idea of judging a rod or a clock by its ability to measure space or time is a convenient illusion: in fact we are testing real rods and clocks against what is essentially an embodiment of their own Platonic ideals, which are in turn conceived as the forms which give the laws of physics their most elegant expression. A pertinent example, much used by Julian Barbour, is Ephemeris time and the notion of a `good clock’. First, by using material bodies like pendulums and planets to serve as clocks, we find that the motions of material bodies approximately conform to Newton’s laws of mechanics and gravitation. We then make a metaphysical leap and declare the laws to be exactly true, and the inaccuracies to be due to imperfections in the clocks used to collect the data. This leads to the definition of the `Ephemeris time’, the time relative to which the planetary motions conform most closely to Newton’s laws, and a `good clock’ is then defined to be a clock whose time is closest to Ephemeris time.

The same thing happens in making the leap to special relativity. Einstein observed that, in light of Maxwell’s theory of electromagnetism, the empirical law of the relativity of motion seemed to have only a limited validity in nature. That is, assuming no changes to the behaviour of rods and clocks used to make measurements, it would not be possible to establish the law of the relativity of motion for electrodynamic bodies. Einstein made a metaphysical leap: he decided to upgrade this law to the universal Principle of Relativity, and to interpret its apparent inapplicability to electromagnetism as the failure of the rods and clocks used to test its validity. By constructing new rods and clocks that incorporated electromagnetism in the form of hypothetical light beams bouncing between mirrors, Einstein rebuilt space-time so as to give the laws of physics a more elegant form, in which the Relativity Principle is valid in the same regime as Maxwell’s equations.

Ladder for Booker T. Washington – Martin Puryear

By now, you can guess how I will interpret the step to general relativity. Empirical observations seem to suggest a (local) equivalence between a uniformly accelerated lab and a stationary lab in a gravitational field. However, as long as we consider `ideal’ clocks to conform to flat Minkowski space-time, we have to regard the time-dilated clocks of a gravitationally affected observer as being faulty. The empirical fact that observers stationary in a gravitational field cannot distinguish themselves (locally) from uniformly accelerated observers then seems accidental; there appears no reason why an observer could not locally detect the presence of gravity by comparing his normal clock to an `ideal clock’ that is somehow protected from gravity. On the other hand, if we raise this empirical indistinguishability to a matter of principle – the Einstein Equivalence Principle – we must conclude that time dilation should be incorporated into the very definition of an `ideal’ clock, and similarly with the gravitational effects on rods. Once the ideal rods and clocks are updated to include gravitational effects as part of their constitution (and not an interfering external force) they give rise to a geometry that is curved. Most magically of all, if we choose the simplest way to couple this geometry to matter (the Einstein Field Equations), we find that there is no need for a gravitational force at all: bodies follow the paths dictated by gravity simply because these are now the inertial paths followed by freely moving bodies in the curved space-time. Thus, gravity can be entirely replaced by geometry of space-time.

As we can see from the above examples, each revolution in our idea of space-time was achieved by reconsidering the nature of rods and clocks, so as to make the laws of physics take a more elegant form by incorporating some new physical principle (e.g. the Relativity and Equivalence principles). What is remarkable is that this method does not require us to go all the way back to the fundamental properties of matter, prior to space-time, and derive everything again from scratch (the constructive theory approach). Instead, we can start from a previously existing conception of space-time and then upgrade it by modifying its primary elements (rods and clocks) to incorporate some new principle as part of physical law (the principle theory approach). The question is, will quantum gravity let us get away with the same trick?

I’m betting that it will. The challenge is to identify the empirical principle (or principles) that embody quantum mechanics, and upgrade them to universal principles by incorporating them into the very conception of the rods and clocks out of which general relativistic space-time is made. The result will be, hopefully, a picture of quantum geometry that retains a clear operational interpretation. Perhaps even Percy Bridgman, who dismissed the Planck length as being of “no significance whatever” [3] due to its empirical inaccessibility, would approve.

Boots with laces – Van Gogh

[1] Brown, Physical Relativity, p8.
[2] Rovelli, `Halfway through the woods: contemporary research on space and time’, in The Cosmos of Science, p194.
[3] Bridgman, Dimensional Analysis, p101.

Why Quantum Gravity needs Operationalism: Part 2

(Update: My colleagues pointed out that Wittgenstein was one of the greatest philosophers of the 20th century and I should not make fun of him, and anyway he was only very loosely associated with the Vienna circle. All well and true — but he was at least partly responsible for the idea that got the Vienna Circle onto Verificationism, and all of you pedants can go look at the references if you don’t believe me.)

“Where neither confirmation nor refutation is possible, science is not concerned.”    — Mach

Some physicists give philosophy a bad rap. I like to remind them that all the great figures in physics had a keen interest in philosophy, and were strongly influenced by the work of philosophers. Einstein made contributions to philosophy as well as physics, as did Ernst Mach, whose philosophical work had a strong influence on Einstein in formulating his General Theory of Relativity. In his own attitude to philosophy, Einstein was a self-described “epistemological opportunist” [1]. (Epistemology is, broadly speaking, the philosophy of knowledge and how it is acquired.) But philosophy sometimes gets in the way of progress, as explained in the following story.

A physicist was skipping along one day when he came upon a philosopher, standing rigid in the forest. “Why standeth you thus?” he inquired.

“I am troubled by a paradox!” said the philosopher. “How is it that things can move from place to place?”

“What do you mean? I moved here by skipping, didn’t I?”

“Yes, sure. But I cannot logically explain why the world allows it to be so. You see, a philosopher named Zeno argued that in order to traverse any finite distance, one would have to first traverse an infinite number of partitions of that distance. But how can one make sense of completing an infinite number of tasks in a finite amount of time?”

“Well dang,” said the physicist “that’s an interesting question. But wait! Could it be that space and time are actually divided up into a finite number of tiny chunks that cannot be sub-divided further? What an idea!”

“Ah! Perhaps,” said the philosopher, “but what if the world is indeed a continuum? Then we are truly stuck.”

At that moment, a mathematician who had been dozing in a tree fell out and landed with a great commotion.

“Terribly sorry! Couldn’t help but overhear,” he said. “In fact I do believe it is conceptually possible for an infinite number of things to add up to a finite quantity. Why, this gives me a great idea for calculating the area under curves. Thank you so much, I’d better get to it!”

“Yes, yes we must dash at once! There’s work to do!” agreed the physicist.

“But wait!” cried the philosopher, “what if time is merely an illusion? And what is the connection of abstract mathematics to the physical world? We have to work that out first!”

But the other two had already disappeared, leaving the philosopher in his forest to ponder his way down deeper and ever more complex rabbit-holes of thought.

***

Philosophy is valuable for pointing us in the right direction and helping us to think clearly. Sometimes philosophy can reveal a problem where nobody thought there was one, and this can lead to a new insight. Sometimes philosophy can identify and cure fallacies in reasoning. In solving a problem, it can highlight alternative solutions that might not have been noticed otherwise. But ultimately, physicists only tend to turn to philosophy when they have run out of ideas, and most of the time the connection of philosophy to practical matters seems tenuous at best. If philosophers have a weakness, it is only that they tend to think too much, whereas a physicist only thinks as hard as he needs to in order to get results.

After that brief detour, we are ready to return to our hero — physicist Percy Bridgman — and witness his own personal fling and falling-out with philosophy. In a previous post, we introduced Bridgman’s idea of operationalism. Recall that Bridgman emphasized that a physical quantity such as `length’ or `temperature’ should always be attached to some clear notion of how to measure that quantity in an experiment. It is not much of a leap from there to say that a concept is only meaningful if it comes equipped with instructions of how to measure it physically.

Although Bridgman was a physicist, his idea quickly caught on amongst philosophers, who saw in it the potential for a more general theory of meaning. But Bridgman quickly became disillusioned with the direction the philosophers were taking as it became increasingly clear that operationalism could not stand up to the demanding expectations set by the philosophers.

The main culprits were a group of philosophers called the Vienna Circle [2]. Following an idea of Ludwig Wittgenstein, these philosophers attempted to define concepts as meaningful only if they could somehow be verified in principle, an approach that became known as Verificationism. Verificationism was a major theme of the school of thought called `logical empiricism’ (aka logical positivism), the variants of which are embodied in the combined work of philosophers in the Vienna Circle, notably Reichenbach, Carnap and Schlick, as well as members outside the group, like the Berlin Society.

At that time, Bridgman’s operationalism was closely paralleled by the ideas of the Verificationists. This was unfortunate because around the middle of the 20th century it became increasingly apparent that there were big philosophical problems with this idea. On the physics side of things, the philosophers realized that there could be meaningful concepts that could not be directly verified. Einstein pointed out that we cannot measure the electric field inside a solid body, yet it is still meaningful to define the field at all points in space:

“We find that such an electrical continuum is always applicable only for the representation of electrical states of affairs in the interior of ponderable bodies. Here too we define the vector of electric field strength as the vector of the mechanical force exerted on the unit of positive electric quantity inside a body. But the force so defined is no longer directly accessible to experiments. It is one part of a theoretical construction that can be correct or false, i.e., consistent or not consistent with experience, only as a whole.” [1]

Incidentally, Einstein got this point of view from a philosopher, Duhem, who argued that isolated parts of a theory do not stand as meaningful on their own; only when taken together as a whole can they be matched with empirical data. It therefore does not always make sense to isolate some apparently metaphysical aspect of a theory and criticize it as not being verifiable. In a sense, the verifiability of an abstract quantity like the electric field hinges on its placement within a larger theoretical framework that extends to the devices used to measure the field.

In addition, the Verificationists began to fall apart over some rather technical philosophical points. It went something like this:

Wittgenstein: “A proposition is meaningful if and only if it is conceivable for the proposition to be completely verified!”

Others: “What about the statement `All dogs are brown’? I can’t very well check that all dogs are brown can I? Most of the dogs who ever lived are long dead, for a start.”

Wittgenstein: “Err…”

Others: “And what about this guy Karl Popper? He says nothing can ever be completely verified. Our theories are always wrong, they just get less wrong with time.”

Wittgenstein: *cough* *cough* I have to go now. (runs away).

Carnap: Look, we don’t have to take such a hard line. Statements like `All dogs are brown’ are still meaningful, even though they can’t be completely verified.

Schlick: No, no, you’ve got it wrong! Statements like `All dogs are brown’ are meaningless! They simply serve to guide us towards other statements that do have meaning.

Quine: No, you guys are missing a much worse problem with your definition: how do you determine which statements actually require verification (like `The cat sat on the mat’), and which ones are just true by definition (`All bachelors are unmarried’)? I can show that there is no consistent way to separate the two kinds of statement.

(Everybody’s head explodes)

So you can see how the philosophers tend to get carried away. And where was poor old Percy Bridgman during all this? He was backed into a corner, with people prodding his chest and shouting at him:

Gillies: “How do you tell if a measurement method is valid? If there is nothing more to a concept than its method of measurement, then every method of measurement is automatically valid!”

Bridgman: “Well, yes, I suppose…”

Positivists: “And isn’t it true that even if we all agree to use a single measurement of length, this does not come close to exhausting what we mean by the word length? How disappointing.”

Bridgman: “Now wait a minute –”

Margenau: “And just what the deuce do you mean by `operations’ anyhow?”

Bridgman: “Well, I … hey, aren’t you a physicist? You should be on my side!”

(Margenau discreetly melts into the crowd)

To cut a long story short, by the time Quine was stomping on the ashes of what once was logical empiricism, Bridgman’s operationalism had suffered a similar fate, leaving Bridgman battered and bloody on the sidelines wondering where he went wrong:

“To me now it seems incomprehensible that I should ever have thought it within my powers … to analyze so thoroughly the functioning of our thinking apparatus that I could confidently expect to exhaust the subject and eliminate the possibility of a bright new idea against which I would be defenseless.”

To console himself, Bridgman retreated to his laboratory where he at least knew what things were, and could spend hours hand-drilling holes in blocks of steel without having to waste his time arguing about it. Sometimes the positivists would prod him, saying:

“Bridgman! Hey Bridgman! If I measure the height of the Eiffel tower, does that count as an operation, or do you have to perform every experiment yourself?” to which Bridgman would narrow his eyes and mutter: “I don’t trust any experimental results except the ones I perform myself. Now leave me alone!”

Needless to say, Bridgman’s defiantly anti-social attitude to science did not help improve the standing of operationalism among philosophers or physicists; few people were prepared to agree that every experiment has to be verified by an individual for him or herself. Nevertheless, Bridgman remained a heroic figure and a defender of the scientific method as the best way to cope with an otherwise incomprehensible and overwhelming universe. Bridgman’s stubborn attitude of self-reliance was powerfully displayed in his final act: he committed suicide by gunshot wound after being diagnosed with metastatic cancer. In his suicide note, he wrote [3]:

“It isn’t decent for society to make a man do this thing himself. Probably this is the last day I will be able to do it myself.”

Bridgman’s original conception of operationalism continues to resonate with physicists to this very day. In the end he was forced to admit that it did not constitute a rigorous philosophical doctrine of meaning, and he retracted some of his initially over-optimistic statements. However, he never gave up the more pragmatic point of view that an operationalist attitude can be beneficial to the practicing scientist. Towards the end of his life, he maintained that:

“…[T]here is nothing absolute or final about an operational analysis […]. So far as any dogma is involved here at all, it is merely the conviction that it is better, because it takes us further, to analyze into doings or happenings rather than into objects or entities.”


[1] See the SEP entry on Einstein’s philosophy: http://plato.stanford.edu/entries/einstein-philscience/

[2] See the SEP entry on the Vienna Circle: http://plato.stanford.edu/entries/vienna-circle/

[3] Sherwin B. Nuland, “How We Die: Reflections on Life’s Final Chapter”, Random House, 1995.

Why quantum gravity needs operationalism: Part 1

This is the first of a series of posts in which I will argue that physicists can gain insight into the puzzles of quantum gravity if we adopt a philosophy I call operationalism. The traditional interpretation of operationalism by philosophers was found to be lacking in several important ways, so the concept will have to be updated to a modern context if we are to make use of it, and its new strengths and limitations will need to be clarified. The goal of this first post is to introduce you to operationalism as it was originally conceived and as I understand it. Later posts will explain the areas in which it failed as a philosophical doctrine, and why it might nevertheless succeed as a tool in theoretical physics, particularly in regard to quantum gravity [1].

Operationalism started with Percy Williams Bridgman. Bridgman was a physicist working in the early 20th century, at the time when the world of physics was being shaken by the twin revolutions of relativity and quantum mechanics. Einstein’s hand was behind both revolutions: first through the publication of his theory of General Relativity in 1916, and second through his explanation of the photoelectric effect using things called quanta, which earned him the Nobel prize in 1921. This upheaval was a formative time for Bridgman, who was especially struck by Einstein’s clever use of thought experiments to derive special relativity.

Einstein had realized that there was a problem with the concept of `simultaneity’. Until then, everybody had taken it for granted that if two events are simultaneous for one observer, then they are simultaneous for every other observer as well. But Einstein asked the crucial question: how does a person actually know that two events happened at the same time? To answer it, he had to adopt an operational definition of simultaneity: an observer traveling at constant velocity will consider two events equidistant from their own position to be simultaneous if beams of light emitted from each event reach the observer’s location at the same time, as measured by the observer’s clock (this definition can be further generalised to apply to any pair of events as seen by an observer in arbitrary motion).

From this definition, together with the postulate that light travels at the same speed for every observer, one can deduce the relativity of simultaneity: two events that are simultaneous for one observer may not be simultaneous for another observer in relative motion. This is one of the key observations of special relativity. Bridgman noticed that Einstein’s deep insight relied upon taking an abstract concept, in this case simultaneity, and grounding it in the physical world by asking: `what sort of operations must be carried out in order to measure this thing?’
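
For readers who want to see the arithmetic, here is a minimal sketch of that deduction in modern notation, using the Lorentz transformation (a later formalisation, not the route Einstein’s original thought experiment takes). If one observer assigns two events a spatial separation \Delta x and a time separation \Delta t, then a second observer moving at velocity v relative to the first assigns them a time separation

\Delta t' = \gamma \left( \Delta t - \frac{v\,\Delta x}{c^{2}} \right), \qquad \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}} .

Setting \Delta t = 0 (the events are simultaneous for the first observer) while \Delta x \neq 0 gives \Delta t' = -\gamma v \Delta x / c^{2}, which is not zero: the second observer does not find the events simultaneous.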

For his own part, Bridgman was a brilliant experimentalist who won the Nobel prize in 1946 for his pioneering work on creating extremely high pressures in his laboratory. Using state-of-the-art technology, he created pressures up to 100,000 atmospheres, nearly 100 times greater than anyone had achieved before him, and then did what any good scientist would do: he put various things into his pressure chamber to record what happened to them. Mostly, as you might expect, they got squished. At pressures beyond 25,000 atmospheres, steel can be molded like play-dough; at 50,000 atmospheres all normal liquids have frozen solid. (Of course, Bridgman’s vessel had to be very small to withstand such pressure, which limited the things he could put in it.) But Bridgman faced a unique problem: the pressures he created were so high that no standard pressure gauge could measure them, because the gauge would basically get squished like everything else. The situation is the same as trying to measure the temperature of the sun using a regular thermometer: it would explode and vaporize before you could even take a proper reading. Consequently, Bridgman had no scientific way to tell the difference between `really high pressure’ and `really freaking high pressure’, so he was forced to design completely new ways of measuring pressure in his laboratory, such as looking at the phase transitions of the element bismuth and the resistivity of the alloy Manganin [2]. This led him to wonder: what does a concept like `pressure’ or `temperature’ really mean in the absence of a measuring technique?

Bridgman proposed that quantities measured by different operations should always be regarded as being fundamentally different, even though they may coincide in certain situations. This led to a minor problem in the definitions of quantities. The temperature of a cup of water is measured by sticking a thermometer in it. The temperature of the sun is measured by looking at the spectrum of radiation emitted from it. If these quantities are measured by such different methods in different regimes, why do we call them both `temperature’? In what sense are our operations measuring the same thing? The solution, according to Bridgman, is that there is a regime in between the two in which both methods of measuring temperature are valid – and in this regime the two measurements must agree. The temperature of molten gold could potentially be measured by the right kind of thermometer, as well as by looking at its radiation spectrum, and both of these methods will give the same temperature. This allows us to connect the concept of temperature on the sun to temperature in your kitchen and call them by the same name.

This method of `patching together’ different ways of measuring the same quantity is reminiscent of placing co-ordinate patches on manifolds in mathematical physics. In general, there is no way to cover an entire manifold (representing space-time for example) with a single set of co-ordinates that are valid everywhere. But we can cover different parts of the manifold in patches, provided that the co-ordinates agree in the areas where they overlap. The key insight is that there is no observer who can see all of space-time at once – any physical observer has to travel from one part of the manifold to another by a continuous route. Hence it does not matter if the observer cannot describe the entire manifold by a single map, so long as they have a series of maps that smoothly translate into one another as they travel along their chosen path – even if the maps used much later in the journey have no connection or overlap with the maps used early in the journey. Similarly, as we extend our measuring devices into new regimes, we must gradually replace them with new devices as we go. The eye is replaced with the microscope, the microscope with the electron microscope and the electron microscope with the particle accelerator, which now bears no resemblance to the eye, although they both gaze upon the same world.
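
For readers who know the language, the patching condition can be stated explicitly (this is just the standard textbook definition of an atlas, not anything special to Bridgman). A manifold M is covered by charts (U_i, \phi_i), where each map \phi_i : U_i \to \mathbb{R}^{n} assigns co-ordinates to the points in its patch, and on every overlap the two co-ordinate systems must be related by a smooth change of variables:

\phi_j \circ \phi_i^{-1} \,:\, \phi_i(U_i \cap U_j) \to \phi_j(U_i \cap U_j) \quad \text{must be smooth whenever } U_i \cap U_j \neq \emptyset .

In the analogy, each measuring method is a chart, the regime in which two methods are both applicable is the overlap, and Bridgman’s requirement that they agree there plays the role of the compatibility condition.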

Curiously, there was another man named Bridgman active around the same time, who is likely to be more familiar to artists: that is George Bridgman, author of Bridgman’s Complete Guide to Drawing From Life. Although they were two completely different Bridgmans, working in different disciplines, both of them were concerned with essentially the same problem: how to connect our internal conception of the world with the devices by which we measure the world. In the case of Percy Bridgman, it was a matter of connecting abstract physical quantities to their measurement devices, while George Bridgman aimed to connect the figure in the mind to the functions of the hands and eyes. We close with a quote from the artist:

“Indeed, it is very far from accurate to say that we see with our eyes. The eye is blind but for the idea behind the eye.”

[1] Everything I have written comes from Hasok Chang’s entry in the Stanford Encyclopedia of Philosophy on operationalism, which is both clearer and more thorough than my own ramblings.

[2] Readers interested in the finer points of Percy Bridgman’s work should see his Nobel prize lecture.