# A meditation on physical units: Part 2

[Preface: This is the second part of my discussion of this paper by Craig Holt. It has a few more equations than usual, so strap a seat-belt onto your brain and get ready!]

“Alright brain. You don’t like me, and I don’t like you, but let’s get through this thing and then I can continue killing you with beer.”    — Homer Simpson

Imagine a whale. We like to say that the whale is big. What does that mean? Well, if we measure the length of the whale, say by comparing it to a meter-stick, we will count up a very large number of meters. However, this only tells us that the whale is big in comparison to a meter-stick. It doesn’t seem to tell us anything about the intrinsic, absolute length of the whale. But what is the meaning of `intrinsic, absolute’ length?

Imagine the whale is floating in space in an empty universe. There are no planets, people, fish or meter-sticks to compare the whale to. Maybe we could say that the whale has the property of length, even though we have no way of actually measuring its length. That’s what `absolute’ length means. We can imagine that it has some actual number, independently of any standard for comparison like a meter-stick.

In Craig Holt’s paper, this distinction — between measured and absolute properties — is very important. All absolute quantities have primes (also called apostrophes), so the absolute length of a whale would be written as whale-length’ and the absolute length of a meter-stick is written meter’. The length of the whale that we measure, in meters, can be written as the ratio whale-length’ / meter’. This ratio is something we can directly measure, so it doesn’t need a prime, we can just call it whale-length: it is the number of meter sticks that equal a whale-length. It is clear that if we were to change all of the absolute lengths in the universe by the same factor, then the absolute properties whale-length’ and meter’ would both change, but the measurable property of whale-length would not change.

Ok, so, you’re probably thinking that it is weird to talk about absolute quantities if we can’t directly measure them — but who says that you can’t directly measure absolute quantities? I only gave you one example where, as it turned out, we couldn’t measure the absolute length. But one example is not a general proof. When you go around saying things like “absolute quantities are meaningless and therefore changes in absolute quantities can’t be detected”, you are making a pretty big assumption. This assumption has a name, it is called Bridgman’s Principle (see the last blog post).

Bridgman’s Principle is the reason why at school they teach you to balance the units on both sides of an equation. For example, `speed’ is measured in units of length per time (no, not milligrams — this isn’t Breaking Bad). If we imagine that light has some intrinsic absolute speed c’, then to measure it we would need to have (for example) some reference length L’ and some reference time duration T’ and then see how many lengths of L’ the light travels in time T’. We would write this equation as:

$$c' = C \times \frac{L'}{T'} \qquad (1)$$

where C is the speed that we actually measure. Bridgman’s Principle says that a measured quantity like C cannot tell us the absolute speed of light c’, it only tells us what the value of c’ is compared to the values of our measuring apparatus, L’ and T’ (for example, in meters per second). If there were some way that we could directly measure the absolute value of c’ without comparing it to a measuring rod and a clock, then we could just write c’ = C without needing to specify the units of C. So, without Bridgman’s Principle, all of Dimensional Analysis basically becomes pointless.

So why should Bridgman’s Principle be true in general? Scientists are usually lazy and just assume it is true because it works in so many cases (this is inductive reasoning, which is not really a proof at all). After all, it is hard to find a way of measuring the absolute length of something without referring to some other reference object like a meter-stick. But being a good scientist is all about being really tight-assed, so we want to know if Bridgman’s Principle can be proven to be watertight.

A neat example of a watertight principle is the Second Law of Thermodynamics. This Law was also originally an inductive principle (it seemed to be true in pretty much all thermodynamic experiments) but then Boltzmann came along with his famous H-Theorem and proved that it has to be true if matter is made up of atomic particles. This is called a constructive justification of the principle [1].

The H Theorem makes it nice and easy to judge whether some crackpot’s idea for a perpetual motion machine will actually run forever. You can just ask them: “Is your machine made out of atoms?” And if the answer is `yes’ (which it probably is), then you can point out that the H-Theorem proves that machines made up of atoms must obey the Second Law, end of story.

Coming up with a constructive proof, like the H-Theorem, is pretty hard. In the case of Bridgman’s Principle, there are just too many different things to account for. Objects can have numerous properties, like mass, charge, density, and so on; also there are many ways to measure each property. It is hard to imagine how we could cover all of these different cases with just a single theorem about atoms. Without the H-Theorem, we would have to look over the design of every perpetual motion machine, to find out where the design is flawed. We could call this method “proof by elimination of counterexamples”. This is exactly the procedure that Craig uses to lend support to Bridgman’s Principle in his paper.

To get a flavor for how he does it, recall our measurement of the speed of light from equation (1). Notice that the measured speed C does not have to be the same as the absolute speed c’. In fact we can rewrite the equation as:

$$C = \frac{c'}{L'/T'} \qquad (2)$$

and this makes it clear that the number C that we measure is not itself an absolute quantity, but rather is a comparison between the absolute speed of light c’ and the absolute distance L’ per time T’. What would happen if we changed all of the absolute lengths in the universe? Would this change the value of the measured speed of light C? At first glance, you might think that it would, as long as the other absolute quantities on the right hand side of equation (2) are independent of length. But if that were true, then we would be able to measure changes in absolute length by observing changes in the measurable speed of light C, and this would contradict Bridgman’s Principle!

To get around this, Craig points out that the length L’ and time T’ are not fundamental properties of things, but are actually reducible to the atomic properties of physical rods and clocks that we use to make measurements. Therefore, we should express L’ and T’ in terms of the more fundamental properties of matter, such as the masses of elementary particles and the coupling constants of forces inside the rods and clocks. In particular, he argues that the absolute length of any physical rod is equal to some number times the “Bohr radius” of a typical atom inside the rod. This radius is in turn proportional to:

$$L' \propto \frac{h'}{m'_e\,c'} \qquad (3)$$

where h’, c’ are the absolute values of Planck’s constant and the speed of light, respectively, and m’e is the absolute electron mass. Similarly, the time duration measured by an atomic clock is proportional to:

$$T' \propto \frac{h'}{m'_e\,c'^2} \qquad (4)$$

As a result, both the absolute length L’ and time T’ actually depend on the absolute constants c’, h’ and the electron mass m’e. Writing L’ = X h’/(m’e c’) and T’ = Y h’/(m’e c’²), and substituting these into the expression for the measured speed of light, we get:

$$C = c'\,\frac{T'}{L'} = c' \times \frac{Y\,h'/(m'_e\,c'^2)}{X\,h'/(m'_e\,c')} = \frac{Y}{X} \qquad (5)$$

where X, Y are the proportionality constants for the rod and the clock, respectively. So the factors of c’, h’ and m’e all cancel and we are left with C = Y/X. The numbers X and Y depend on how we construct our rods and clocks — for instance, they depend on how many atoms are inside the rod, and what kind of atom we use inside our atomic clock. In fact, the definitions of a `meter’ and a `second’ are specially chosen so as to make this ratio exactly C = 299,792,458 [2].
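Craig’s cancellation argument is easy to check numerically. Here is a quick sketch in Python; the input values and the construction constants X, Y are arbitrary placeholders (not physical values), chosen only to show that the measured speed depends on nothing but how the rod and clock are built:

```python
def measured_speed_of_light(c_abs, h_abs, me_abs, X, Y):
    """Measured speed C = c' * T' / L', with the rod and clock built from atoms:
    rod length L' = X * h'/(m'e c'), clock period T' = Y * h'/(m'e c'^2)."""
    L = X * h_abs / (me_abs * c_abs)      # absolute rod length
    T = Y * h_abs / (me_abs * c_abs**2)   # absolute clock period
    return c_abs * T / L                  # rod-lengths travelled per clock-tick

# Two universes with wildly different absolute constants...
C1 = measured_speed_of_light(c_abs=3.0e8, h_abs=6.6e-34, me_abs=9.1e-31, X=2.0, Y=5.0)
C2 = measured_speed_of_light(c_abs=7.7e20, h_abs=1.234, me_abs=42.0, X=2.0, Y=5.0)

# ...yield exactly the same measured number: the ratio of the construction
# constants, with every absolute quantity cancelled out.
print(C1, C2)
```

Changing c’, h’ or m’e by any factor you like leaves the printed numbers identical, which is Bridgman’s Principle in miniature.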

Now that we have included the fact that our measuring rods and clocks are made out of matter, we see that in fact the right hand side of equation (5) contains no absolute quantities at all. Therefore changing the absolute length, time, mass, speed etc. cannot have any effect on the measured speed of light C, and Bridgman’s principle is safe — at least in this example.

(Some readers might wonder why making a clock heavier should also make it run faster, as seems to be suggested by equation (4). It is important to remember that the usual kinds of clocks we use, like wristwatches, are quite complicated things containing trillions of atoms. To calculate how the behaviour of all these atoms would change the ticking of the overall clock mechanism would be, to put it lightly, a giant pain in the ass. That’s why Craig only considers very simple devices like atomic clocks, whose behaviour is well understood at the atomic level [3].)

Another simple model of a clock is the light clock: a beam of light bouncing between two mirrors separated by a fixed distance L’. Since light has no mass, you might think that the frequency of such a clock should not change if we were to increase all absolute masses in the universe. But we saw in equation (4) that the frequency of an atomic clock is proportional to the electron mass, and so it would increase. It then seems like we could measure this increase in atomic clock frequency by comparing it to a light clock, whose frequency does not change — and then we would know that the absolute masses had changed. Is this another threat to Bridgman’s Principle?

The catch is that, as Craig points out, the length L’ between the mirrors of the light clock is determined by a measuring rod, and the rod’s length is inversely proportional to the electron mass as we saw in equation (3). So if we magically increase all the absolute masses, we would also cause the absolute length L’ to get smaller, which means the light-clock frequency would increase. In fact, it would increase by exactly the same amount as the atomic clock frequency, so comparing them would not show us any difference! Bridgman’s Principle is saved again.
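A toy version of this comparison can be sketched in Python. The specific numbers are placeholders; the only point is that both clock frequencies scale the same way when every absolute mass is multiplied by the same factor:

```python
def atomic_clock_freq(c, h, me):
    # Atomic clock: frequency ~ m'e c'^2 / h' (inverse of the atomic time scale)
    return me * c**2 / h

def light_clock_freq(c, h, me, X=3.0):
    # Light clock: mirrors separated by a physical rod of length L' ~ X h'/(m'e c')
    L = X * h / (me * c)
    return c / (2 * L)  # one tick = light bouncing there and back

c, h, me = 3.0e8, 6.6e-34, 9.1e-31
scale = 1e6  # magically multiply all absolute masses by a million

ratio_before = atomic_clock_freq(c, h, me) / light_clock_freq(c, h, me)
ratio_after = atomic_clock_freq(c, h, me * scale) / light_clock_freq(c, h, me * scale)

# Both clocks speed up by exactly the same factor, so comparing them
# reveals nothing about the change in absolute mass.
print(ratio_before, ratio_after)
```

The light clock speeds up not because light got heavier, but because the rod holding its mirrors shrank.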

Let’s do one more example, this time a little bit more extreme. According to Einstein’s theory of general relativity, every lump of mass has a Schwarzschild radius, which is the radius of a sphere such that if you crammed all of the mass into this sphere, it would turn into a black hole. Given some absolute amount of mass M’, its Schwarzschild radius is given by the equation:

$$R' = \frac{2\,G'\,M'}{c'^2} \qquad (6)$$

where c’ is the absolute speed of light from before, and G’ is the absolute gravitational constant, which determines how strong the gravitational force is. Now, glancing at the equation, you might think that if we keep increasing all of the absolute masses in the universe, planets will start turning into black holes. For instance, the radius of Earth is about 6370 km. This is the Schwarzschild radius for a mass of roughly a billion times Earth’s mass. So if we magically increased all absolute masses by a factor of a billion, shouldn’t Earth collapse into a black hole? Then, moments before we all die horribly, we would at least know that the absolute mass has changed, and Bridgman’s Principle was wrong.
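As a sanity check on that factor (using textbook values for G, c and the Earth’s mass and radius, which are my own inputs rather than numbers from the paper):

```python
# Find the mass whose Schwarzschild radius R = 2 G M / c^2 equals Earth's radius.
G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8         # speed of light, m/s
M_earth = 5.97e24   # kg
R_earth = 6.37e6    # m

M_critical = R_earth * c**2 / (2 * G)
factor = M_critical / M_earth
print(factor)  # ~7e8 Earth masses
```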

Of course, that is only true if changing the absolute mass doesn’t affect the other absolute quantities in equation (6). But as we now know, increasing the absolute mass will cause our measuring rods to shrink, and our clocks to run faster. So the question is, if we scale the masses by some factor X, do all the X‘s cancel out in equation (6)?

Well, since our absolute lengths have to shrink, the Schwarzschild radius should shrink, so if we multiply M’ by X, then we should divide the radius R’ by X. This doesn’t balance! Hold on though — we haven’t dealt with the constants c’ and G’ yet. What happens to them? In the case of c’, we have c’ = C L’ / T’. Since L’ and T’ both decrease by a factor of X (lengths and time intervals get shorter) there is no overall effect on the absolute speed of light c’.

How do we measure the quantity G’? Well, G’ tells us how much two masses (measured relative to a reference mass m’) will accelerate towards each other due to their gravitational attraction. Newton’s law of gravitation says:

$$a' = N\,\frac{G'\,m'}{L'^2} \qquad (7)$$

where N is some number that we can measure, and it depends on how big the two masses are compared to the reference mass m’, how large the distance between them is compared to the reference length L’, and so forth. If we measure the acceleration a’ using the same reference length and time L’,T’, then we can write:

$$a' = A\,\frac{L'}{T'^2} \qquad (8)$$

where A is just the measured acceleration in these units. Putting this all together, we can re-arrange equation (7) to get:

$$G' = \frac{A}{N}\,\frac{L'^3}{m'\,T'^2} \qquad (9)$$

and we can define G = (A/N) as the actually measured gravitational constant in the chosen units. From equation (9), we see that multiplying all the absolute masses (including the reference mass m’) by a factor of X, and hence dividing each instance of L’ and T’ by X, implies that the absolute constant G’ will actually change: it will be divided by a factor of X².

What is the physics behind all this math? It goes something like this: suppose we are measuring the attraction between two masses separated by some distance. If we increase the masses, then our measuring rods shrink and our clocks get faster. This means that when we measure the accelerations, the objects seem to accelerate faster than before. This is what we expect, because two masses should become more attractive (at the same distance) when they become more massive. However, the absolute distance between the masses also has to shrink. The net effect is that, after increasing all the absolute masses, we find that the masses are producing the exact same attractive force as before, only at a closer distance. This means the absolute attraction at the original distance is weaker — so G’ has become weaker after the absolute masses in the universe have been increased (notice, however, that the actually measured value G does not change).

Returning now to equation (6), and multiplying M’ by X, dividing R’ by X and dividing G’ by X², we find that all the extra factors cancel out. We conclude that increasing all the absolute masses in the universe by a factor of a billion will not, in fact, cause Earth to turn into a black hole, because the effect is balanced out by the contingent changes in the absolute lengths and times of our measuring instruments. Whew!
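The whole bookkeeping exercise fits in a few lines of Python. The starting values are arbitrary placeholders, not physical constants; the point is only that the scaling factors in equation (6) cancel as claimed:

```python
def schwarzschild_radius(G, M, c):
    # Equation (6): R' = 2 G' M' / c'^2
    return 2 * G * M / c**2

# Arbitrary "absolute" starting values (placeholders, not physical constants)
G0, M0, c0 = 1.7, 3.0, 5.0
R0 = schwarzschild_radius(G0, M0, c0)

X = 1e6                 # scale every absolute mass by X...
M1 = M0 * X
G1 = G0 / X**2          # ...so G' is divided by X^2 (the rod-and-clock argument)
c1 = c0                 # ...and c' is untouched (L' and T' shrink together)
R1 = schwarzschild_radius(G1, M1, c1)

# R' shrinks by exactly X, in lockstep with every measuring rod,
# so the radius measured in rod-lengths never changes.
print(R0 / R1)
```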

Craig’s paper is long and very thorough. He compares a whole zoo of physical clocks, including electric clocks, light-clocks, freely falling inertial clocks, different kinds of atomic clocks and even gravitational clocks made from two orbiting planets. Not only does he generalize his claim to Newtonian mechanics, he covers general relativity as well, and the Dirac equation of quantum theory, including a discussion of Compton scattering (a photon reflecting off an electron). Besides all of this, he takes pains to discuss the meaning of coupling constants, the Planck scale, and the related but distinct concept of scale invariance. All in all, Craig’s paper just might be the most comprehensive justification for Bridgman’s principle so far in existence!

Most scientists might shrug and say “who needs it?”. In the same way, not many scientists care to examine perpetual motion machines to find out where the flaw lies. In this respect, Craig is a craftsman of the first order — he cares deeply about the details. Unlike the Second Law of Thermodynamics, Bridgman’s Principle seems rarely to have been challenged. This only makes Craig’s defense of it all the more important. After all, it is especially those beliefs which we are disinclined to question that are most deserving of a critical examination.

Footnotes:

[1] Some physical principles, like the Relativity Principle, have never been given a constructive justification. For this reason, Einstein himself seems to have regarded the Relativity Principle with some suspicion. See this great discussion by Brown and Pooley.

[2] Why not just set it to C=1? Well, no reason why not! Then we would replace the meter by the `light second’, and the second by the `light-meter’. And we would say things like “Today I walked 0.3 millionths of a light second to buy an ice-cream, and it took me just 130 billion light-meters to eat it!” So, you know, that would be a bit weird. But theorists do it all the time.

[3] To be perfectly strict, we cannot assume that a wristwatch will behave in the same way as an atomic clock in response to changes in absolute properties; we would have to derive their behavior constructively from their atomic description. This is exactly why a general constructive proof of Bridgman’s Principle would be so hard, and why Craig is forced to stick with simple models of clocks and rulers.

# A meditation on physical units: Part 1

[Preface: A while back, Michael Raymer, a professor at the University of Oregon, drew my attention to a curious paper by Craig Holt, who tragically passed away in 2014 [1]. Michael wrote:
“Dear Jacques … I would be very interested in knowing your opinion of this paper, since Craig was not a professional academic, and had little community in which to promote the ideas. He was one of the most brilliant PhD students in my graduate classes back in the 1970s, turned down an opportunity to interview for a position with John Wheeler, worked in industry until age 50 when he retired in order to spend the rest of his time in self study. In his paper he takes a Machian view, emphasizing the relational nature of all physical quantities even in classical physics. I can’t vouch for the technical correctness of all of his results, but I am sure they are inspiring.”

The paper makes for an interesting read because Holt, unencumbered by contemporary fashions, freely questions some standard assumptions about the meaning of `mass’ in physics. Probably because it was a work in progress, Craig’s paper is missing some of the niceties of a more polished academic work, like good referencing and a thoroughly researched introduction that places the work in context (the most notable omission is the lack of background material on dimensional analysis, which I will talk about in this post). Despite its rough edges, Craig’s paper led me down quite an interesting rabbit-hole, of which I hope to give you a glimpse. This post covers some background concepts; I’ll mention Craig’s contribution in a follow-up post. ]

______________
Imagine you have just woken up after a very bad hangover. You retain your basic faculties, such as the ability to reason and speak, but you have forgotten everything about the world in which you live. Not just your name and address, but your whole life history, family and friends, and entire education are lost to the epic blackout. Using pure thought, you are nevertheless able to deduce some facts about the world, such as the fact that you were probably drinking Tequila last night.

The first thing you notice about the world around you is that it can be separated into objects distinct from yourself. These objects all possess properties: they have colour, weight, smell, texture. For instance, the leftover pizza is off-yellow, smells like sardines and sticks to your face (you run to the bathroom).

While bending over the toilet for an extended period of time, you notice that some properties can be easily measured, while others are more intangible. The toilet seems to be less white than the sink, and the sink less white than the curtains. But how much less? You cannot seem to put a number on it. On the other hand, you know from the ticking of the clock on the wall that you have spent 37 seconds thinking about it, which is exactly 14 seconds more than the time you spent thinking about calling a doctor.

You can measure exactly how much you weigh on the bathroom scale. You can also see how disheveled you look in the mirror. Unlike your weight, you have no idea how to quantify the amount of your disheveled-ness. You can say for sure that you are less disheveled than Johnny Depp after sleeping under a bridge, but beyond that, you can’t really put a number on it. Properties like time, weight and blood-alcohol content can be quantified, while other properties like squishiness, smelliness and dishevelled-ness are not easily converted into numbers.

You have rediscovered one of the first basic truths about the world: all that we know comes from our experience, and the objects of our experience can only be compared to other objects of experience. Some of those comparisons can be numerical, allowing us to say how much more or less of something one object has than another. These cases are the beginning of scientific inquiry: if you can put a number on it, then you can do science with it.

Rulers, stopwatches, compasses, bathroom scales — these are used as reference objects for measuring the `muchness’ of certain properties, namely, length, duration, angle, and weight. Looking in your wallet, you discover that you have exactly 5 dollars of cash, a receipt from a taxi for 30 dollars, and you are exactly 24 years old since yesterday night.

You reflect on the meaning of time. A year means the time it takes the Earth to go around the Sun, or approximately 365 and a quarter days. A day is the time it takes for the Earth to spin once on its axis. You remember your school teacher saying that all units of time are defined in terms of seconds, and one second is defined as 9192631770 oscillations of the light emitted by a Caesium atom. Why exactly 9192631770, you wonder? What if we just said 2 oscillations? A quick calculation shows that this would make you about 110 billion years old according to your new measure of time. Or what about switching to dog years, which are 7 per human year? That would make you 168 dog years old. You wouldn’t feel any different — you would just be having a lot more birthday parties. Given the events of last night, that seems like a bad idea.
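The arithmetic behind that 110 billion figure is worth a quick check (the 24-year age and the caesium count come from the text; the code is just unit conversion):

```python
age_years = 24
oscillations_per_second = 9192631770  # the current definition of the second

# Redefine the "second" to be just 2 caesium oscillations. Every duration,
# counted in the new seconds, becomes larger by this factor:
stretch = oscillations_per_second / 2

# A "year" still means the same count of seconds, now new seconds,
# so your age in years stretches by the same factor.
age_in_new_years = age_years * stretch
print(age_in_new_years / 1e9)  # ~110 (billion years)
```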

You are twice as old as your cousin, and that is true in dog years, cat years, or clown years [2]. Similarly, you could measure your height in inches, centimeters, or stacked shot-glasses — but even though you might be 800 rice-crackers tall, you still won’t be able to reach the aspirin in the top shelf of the cupboard. Similarly, counting all your money in cents instead of dollars will make it a bigger number, but won’t actually make you richer. These are all examples of passive transformations of units, where you imagine measuring something using one set of units instead of another. Passive transformations change nothing in reality: they are all in your head. Changing the labels on objects clearly cannot change the physical relationships between them.

Things get interesting when we consider active transformations. If a passive transformation is like saying the length of your coffee table is 100 times larger when measured in cm than when measured in meters, then an active transformation would be if someone actually replaced your coffee table with a table 100 times bigger. Now, obviously you would notice the difference because the table wouldn’t fit in your apartment anymore. But imagine that someone, in addition to replacing the coffee table, also replaced your entire apartment and everything in it with scaled-up models 100 times the size. And imagine that you also grew into a giant 100 times your original size while you were sleeping. Then when you woke up, as a giant inside a giant apartment with a giant coffee table, would you realise anything had changed? And if you made yourself a giant cup of coffee, would it make your giant hangover go away?

We now come to one of the deepest principles of physics, called Bridgman’s Principle of absolute significance of relative magnitude, named for our old friend Percy Bridgman. The Principle says that only relative quantities can enter into the laws of physics. This means that, whatever experiments I do and whatever measurements I perform, I can only obtain information about the relative sizes of quantities: the length of the coffee table relative to my ruler, or the mass of the table relative to the mass of my body, etc. According to this principle, actively changing the absolute values of some quantity by the same proportion for all objects should not affect the outcomes of any experiments we could perform.

To get a feeling for what the principle means, imagine you are a primitive scientist. You notice that fruit hanging from trees tends to bob up and down in the wind, but the heavier fruits seem to bounce more slowly than the lighter fruits (for those readers who are physics students, I’m talking about a mass on a spring here). You decide to discover the law that relates the frequency of bobbing motion to the mass of the fruit. You fill a sack with some pebbles (carefully chosen to all have the same weight) and hang it from a tree branch. You can measure the mass of the sack by counting the number of pebbles in it, but you still need a way to measure the frequency of the bobbing. Nearby you hear the sound of water dripping from a leaf into a pond. You decide to measure the frequency by how many times the sack bobs up and down in between drips of water. Now you are ready to do your experiment.

You measure the bobbing frequency of the sack for many different masses, and record the results by drawing in the dirt with a stick. After analysing your data, you discover that the frequency f (in oscillations per water drop) is related to the mass m (in pebbles) by a simple formula:

$$f = \frac{k}{\sqrt{m}} \qquad (1)$$

where k stands for a particular number, say 16.8. But what does this number really mean?

Unbeknownst to you, a clever monkey was watching you from the bushes while you did the experiment. After you retire to your cave to sleep, the monkey comes out to play a trick on you. He carefully replaces each one of your pebbles with a heavier pebble of the same size and appearance, and makes sure that all of the heavier pebbles are the same weight as each other. He takes away the original pebbles and hides them. The next day, you repeat the experiment in exactly the same way, but now you discover that the constant k has changed from yesterday’s value of 16.8 to the new value of 11.2. Does this mean that the law of nature that governs the bobbing of things hanging from the tree has changed overnight? Or should you decide that the law is the same, but that the units that you used to measure frequency and mass have changed?

You decide to apply Bridgman’s Principle. The principle says that if (say) all the masses in the experiment were changed by the same proportion, then the laws of physics would not allow us to see any difference, provided we used the same measuring units. Since you do see a difference, Bridgman’s Principle says that it must be the units (and not the law itself) that have changed. `These must be different pebbles’ you say to yourself, and you mark them by scratching an X onto them. You go out looking for some other pebbles and eventually you find a new set of pebbles which give you the right value of 16.8 when you perform the experiment. `These must be the same kind of pebbles that I used in the original experiment’ you say to yourself, and you scratch an O on them so that you won’t lose them again. Ha! You have outsmarted the monkey.
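The monkey’s trick can even be quantified. Assuming the law f = k/√m (with mass counted in pebbles), the drop in the measured constant from 16.8 to 11.2 tells you exactly how much heavier the substituted pebbles are; the 2.25 factor below is implied by the story’s numbers rather than stated in it:

```python
import math

def bobbing_frequency(k, m_pebbles):
    """The discovered law: f = k / sqrt(m), with mass counted in pebbles."""
    return k / math.sqrt(m_pebbles)

k_day1, k_day2 = 16.8, 11.2

# If each X-pebble really weighs w O-pebbles, then counting in X-pebbles
# under-counts the true mass, and the apparent constant becomes k / sqrt(w):
#   k_day2 = k_day1 / sqrt(w)   =>   w = (k_day1 / k_day2)**2
w = (k_day1 / k_day2) ** 2
print(w)  # each X-pebble weighs 2.25 O-pebbles
```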

Notice that as long as you use the right value for k — which depends on whether you measure the mass using X or O pebbles — then the abstract equation (1) remains true. In physics language, you are interpreting k as a dimensional constant, having the dimensions of frequency times √mass. This means that if you use different units for measuring frequency or mass, the numerical value of k has to change in order to preserve the law. Notice also that the dimensions of k are chosen so that equation (1) has the same dimensions on each side of the equals sign. This is called a dimensionally homogeneous equation. Bridgman’s Principle can be rephrased as saying that all physical laws must be described by dimensionally homogeneous equations.

Bridgman’s Principle is useful because it allows us to start with a law expressed in particular units, in this case `oscillations per water-drop’ and `O-pebbles’, and then infer that the law holds for any units. Even though the numerical value of k changes when we change units, it remains the same in any fixed choice of units, so it represents a physical constant of nature.

The alternative is to insist that our units are the same as before (the pebbles look identical after all). That means that the change in k implies a change in the law itself, for instance, it implies that the same mass hanging from the tree today will bob up and down more slowly than it did yesterday. In our example, it turns out that Bridgman’s Principle leads us to the correct conclusion: that some tricky monkey must have switched our pebbles. But can the principle ever fail? What if physical laws really do change?

Suppose that after returning to your cave, the tricky monkey decides to have another go at fooling you. He climbs up the tree and whispers into its leaves: `Do you know why that primitive scientist is always hanging things from your branch? She is testing how strong you are! Make your branches as stiff and strong as you can tomorrow, and she will reward you with water from the pond’.

The next day, you perform the experiment a third time — being sure to use your `O-pebbles’ this time — and you discover again that the value of k seems to have changed. It now takes many more pebbles to achieve a given frequency than it did on the first day. Using Bridgman’s Principle, you again decide that something must be wrong with your measuring units. Maybe this time it is the dripping water that is wrong and needs to be adjusted, or maybe you have confidence in the regularity of the water drip and conclude that the `O-pebbles’ have somehow become too light. Perhaps, you conjecture, they were replaced by the tricky monkey again? So you throw them out and go searching for some heavier pebbles. You find some that give you the right value of k=16.8, and conclude that these are the real `O-pebbles’.

The difference is that this time, you were tricked! In fact the pebbles you threw out were the real `O-pebbles’. The change in k came from the background conditions of the experiment, namely the stiffness in the tree branches, which you did not consider as a physical variable. Hence, in a sense, the law that relates bobbing frequency to mass (for this tree) has indeed changed [3].

You thought that the change in the constant k was caused by using the wrong measuring units, but in fact it was due to a change in the physical constant k itself. This is an example of a scenario where a physical constant turns out not to be constant after all. If we simply assume Bridgman’s Principle to be true without carefully checking whether it is justified, then it is harder to discover situations in which the physical constants themselves are changing. So, Bridgman’s Principle can be thought of as the assumption that the values of physical constants (expressed in some fixed units) don’t change over time. If we are sure that the laws of physics are constant, then we can use the Principle to detect changes or inaccuracies in our measuring devices that define the physical units — i.e. we can leverage the laws of physics to improve the accuracy of our measuring devices.

We can’t always trust our measuring units, but the monkey also showed us that we can’t always trust the laws of physics. After all, scientific progress depends on occasionally throwing out old laws and replacing them with more accurate ones. In our example, a new law that includes the tree-branch stiffness as a variable would be the obvious next step.

One of the more artistic aspects of the scientific method is knowing when to trust your measuring devices, and when to trust the laws of physics [4]. Progress is made by `bootstrapping’ from one to the other: first we trust our units and use them to discover a physical law, and then we trust in the physical law and use it to define better units, and so on. It sounds like a circular process, but actually it represents the gradual refinement of knowledge, through increasingly smaller adjustments from different angles. Imagine trying to balance a scale by placing handfuls of sand on each side. At first you just dump about a handful on each side and see which is heavier. Then you add a smaller amount to the lighter side until it becomes heavier. Then you add an even smaller amount to the other side until it becomes heavier, and so on, until the scale is almost perfectly balanced. In a similar way, switching back and forth between physical laws and measurement units actually results in both the laws and measuring instruments becoming more accurate over time.

______________

[1] It is a shame that Craig’s work remains incomplete, because I think physicists could benefit from a re-examination of the principles of dimensional analysis. Simplified dimensional arguments are sometimes invoked in the literature on quantum gravity without due consideration for their meaning.

[2] Clowns have several birthdays a week, but they aren’t allowed to get drunk at them, which kind of defeats the purpose if you ask me.

[3] If you are uncomfortable with treating the branch stiffness as part of the physical law, imagine instead that the strength of gravity actually becomes weaker overnight.

[4] This is related to a deep result in the philosophy of science called the Duhem-Quine Thesis.
Quoth Duhem: `If the predicted phenomenon is not produced, not only is the questioned proposition put into doubt, but also the whole theoretical scaffolding used by the physicist’.

# Bootstrapping to quantum gravity

“If … there were no solid bodies in nature there would be no geometry.”
— Poincaré

A while ago, I discussed the mystery of why matter should be the source of gravity. To date, this remains simply an empirical fact. The deep insight of general relativity – that gravity is the geometry of space and time – only provides us with a modern twist: why should matter dictate the geometry of space-time?

There is a possible answer, but it requires us to understand space-time in a different way: as an abstraction that is derived from the properties of matter itself. Under this interpretation, it is perfectly natural that matter should affect space-time geometry, because space-time is not simply a stage against which matter dances, but is fundamentally dependent on matter for its existence. I will elaborate on this idea and explain how it leads to a new avenue of approach to quantum gravity.

First consider what we mean when we talk about space and time. We can judge how far away a train is by listening to the tracks, or gauge how deep a well is by dropping a stone in and waiting to hear the echo. We can tell a mountain is far away just by looking at it, and that the cat is nearby by tripping over it. In all these examples, an interaction is necessary between me and the object, sometimes through an intermediary (the light reflected off the mountain into my eyes) and sometimes not (tripping over the cat). Things can also be far away in time. I obviously cannot interact with people who lived in the past (unless I have a time machine), or people who have yet to be born, even if they stood (or will stand) exactly where I am standing now. I cannot easily talk to my father when he was my age, but I can almost do it, just by talking to him now and asking him to remember his past self. When we say that something is far away in either space or time, what we really mean is that it is hard to interact with, and this difficulty of interaction has certain universal qualities that we give the names `distance’ and `time’.
It is worth mentioning here, as an aside, that in a certain sense, the properties of `time’ can be reduced to properties of `distance’ alone. Consider, for instance, that most of our interactions can be reduced to measurements of distances of things from us, at a given time. To know the time, I invariably look at the distance the minute hand has traversed along its cycle on the face of my watch. Our clocks are just systems with `internal’ distances, and it is the varying correspondence of these `clock distances’ with the distances of other things that we call the `time’. Indeed, Julian Barbour has developed this idea into a whole research program in which dynamics is fundamentally spatial, called Shape Dynamics.

So, if distance and time are just ways of describing certain properties of matter, what is the thing we call space-time?

We now arrive at a crucial point that has been stressed by philosopher Harvey Brown: the rigid rods and clocks with which we claim to measure space-time do not really measure it, in the traditional sense of the word `measure’. A measurement implies an interaction, and to measure space-time would be to grant space-time the same status as a physical body that can be interacted with. (To be sure, this is exactly how many people do wish to interpret space-time; see for instance space-time substantivalism and ontological structural realism).

Brown writes:
“One of Bell’s professed aims in his 1976 paper on `How to teach relativity’ was to fend off `premature philosophizing about space and time’. He hoped to achieve this by demonstrating with an appropriate model that a moving rod contracts, and a moving clock dilates, because of how it is made up and not because of the nature of its spatio-temporal environment. Bell was surely right. Indeed, if it is the structure of the background spacetime that accounts for the phenomenon, by what mechanism is the rod or clock informed as to what this structure is? How does this material object get to know which type of space-time — Galilean or Minkowskian, say — it is immersed in?” [1]

I claim that rods and clocks do not measure space-time, they embody space-time. Space-time is an idealized description of how material rods and clocks interact with other matter. This distinction is important because it has implications for quantum gravity. If we adopt the more popular view that space-time is an independently existing ontological construct, it stands to reason that, like other classical fields, we should attempt to directly quantise the space-time field. This is the approach adopted in Loop Quantum Gravity and extolled by Rovelli:

“Physical reality is now described as a complex interacting ensemble of entities (fields), the location of which is only meaningful with respect to one another. The relation among dynamical entities of being contiguous … is the foundation of the space-time structure. Among these various entities, there is one, the gravitational field, which interacts with every other one and thus determines the relative motion of the individual components of every object we want to use as rod or clock. Because of that, it admits a metrical interpretation.” [2]

One of the advantages of this point of view is that it dissolves some seemingly paradoxical features of general relativity, such as the fact that geometry can exist without (non-gravitational) matter, or the fact that geometry can carry energy and momentum. Since gravity is a field in its own right, it doesn’t depend on the other fields for its existence, nor is there any problem with it being able to carry energy. On the other hand, this point of view tempts us into framing quantum gravity as the mathematical problem of quantising the gravitational field. This, I think, is misguided.

I propose instead to return to a more Machian viewpoint, according to which space-time is contingent on (and not independent of) the existence of matter. Now the description of quantum space-time should follow, in principle, from an appropriate description of quantum matter, i.e. of quantum rods and clocks. From this perspective, the challenge of quantum gravity is to rebuild space-time from the ground up — to carry out Einstein’s revolution a second time over, but using quantum material as the building blocks.

My view about space-time can be seen as a kind of `pulling oneself up by one’s bootstraps’, or a Wittgenstein’s ladder (in which one climbs to the top of a ladder and then throws the ladder away). It works like this:
Step 1: Define the properties of space-time according to the behaviour of rods and clocks.
Step 2: Look for universal patterns or symmetries among these rods and clocks.
Step 3: Take the ideal form of this symmetry and promote it to an independently existing object called `space-time’.
Step 4: Having liberated space-time from the material objects from which it was conceived, use it as the independent standard against which to compare rods and clocks.

Seen in this light, the idea of judging a rod or a clock by its ability to measure space or time is a convenient illusion: in fact we are testing real rods and clocks against what is essentially an embodiment of their own Platonic ideals, which are in turn conceived as the forms which give the laws of physics their most elegant expression. A pertinent example, much used by Julian Barbour, is Ephemeris time and the notion of a `good clock’. First, by using material bodies like pendulums and planets to serve as clocks, we find that the motions of material bodies approximately conform to Newton’s laws of mechanics and gravitation. We then make a metaphysical leap and declare the laws to be exactly true, and the inaccuracies to be due to imperfections in the clocks used to collect the data. This leads to the definition of the `Ephemeris time’, the time relative to which the planetary motions conform most closely to Newton’s laws, and a `good clock’ is then defined to be a clock whose time is closest to Ephemeris time.

The same thing happens in making the leap to special relativity. Einstein observed that, in light of Maxwell’s theory of electromagnetism, the empirical law of the relativity of motion seemed to have only a limited validity in nature. That is, assuming no changes to the behaviour of rods and clocks used to make measurements, it would not be possible to establish the law of the relativity of motion for electrodynamic bodies. Einstein made a metaphysical leap: he decided to upgrade this law to the universal Principle of Relativity, and to interpret its apparent inapplicability to electromagnetism as the failure of the rods and clocks used to test its validity. By constructing new rods and clocks that incorporated electromagnetism in the form of hypothetical light beams bouncing between mirrors, Einstein rebuilt space-time so as to give the laws of physics a more elegant form, in which the Relativity Principle is valid in the same regime as Maxwell’s equations.

By now, you can guess how I will interpret the step to general relativity. Empirical observations seem to suggest a (local) equivalence between a uniformly accelerated lab and a stationary lab in a gravitational field. However, as long as we consider `ideal’ clocks to conform to flat Minkowski space-time, we have to regard the time-dilated clocks of a gravitationally affected observer as being faulty. The empirical fact that observers stationary in a gravitational field cannot distinguish themselves (locally) from uniformly accelerated observers then seems accidental; there appears no reason why an observer could not locally detect the presence of gravity by comparing his normal clock to an `ideal clock’ that is somehow protected from gravity. On the other hand, if we raise this empirical indistinguishability to a matter of principle – the Einstein Equivalence Principle – we must conclude that time dilation should be incorporated into the very definition of an `ideal’ clock, and similarly with the gravitational effects on rods. Once the ideal rods and clocks are updated to include gravitational effects as part of their constitution (and not an interfering external force) they give rise to a geometry that is curved. Most magically of all, if we choose the simplest way to couple this geometry to matter (the Einstein Field Equations), we find that there is no need for a gravitational force at all: bodies follow the paths dictated by gravity simply because these are now the inertial paths followed by freely moving bodies in the curved space-time. Thus, gravity can be entirely replaced by geometry of space-time.

As we can see from the above examples, each revolution in our idea of space-time was achieved by reconsidering the nature of rods and clocks, so as to make the laws of physics take a more elegant form by incorporating some new physical principle (e.g. the Relativity and Equivalence principles). What is remarkable is that this method does not require us to go all the way back to the fundamental properties of matter, prior to space-time, and derive everything again from scratch (the constructive theory approach). Instead, we can start from a previously existing conception of space-time and then upgrade it by modifying its primary elements (rods and clocks) to incorporate some new principle as part of physical law (the principle theory approach). The question is, will quantum gravity let us get away with the same trick?

I’m betting that it will. The challenge is to identify the empirical principle (or principles) that embody quantum mechanics, and upgrade them to universal principles by incorporating them into the very conception of the rods and clocks out of which general relativistic space-time is made. The result will be, hopefully, a picture of quantum geometry that retains a clear operational interpretation. Perhaps even Percy Bridgman, who dismissed the Planck length as being of “no significance whatever” [3] due to its empirical inaccessibility, would approve.

[1] Brown, Physical Relativity, p8.
[2] Rovelli, `Halfway through the woods: contemporary research on space and time’, in The Cosmos of Science, p194.
[3] Bridgman, Dimensional Analysis, p101.

# Stop whining and accept these axioms.

One of the stated goals of quantum foundations is to find a set of intuitive physical principles that can be stated in plain language, from which the essential structure of quantum mechanics can be derived.

So what exactly is wrong with the axioms proposed by Chiribella et al. in arXiv:1011.6451? Loosely speaking, the principles state that information should be localised in space and time, that systems should be able to encode information about each other, and that every process should in principle be reversible, so that information is conserved. The axioms can all be explained using ordinary language, as demonstrated in the sister paper arXiv:1209.5533. They all pertain directly to the elements of human experience, namely, what real experimenters ought to be able to do with the systems in their laboratories. And they all seem quite reasonable, so that it is easy to accept their truth. This is essential, because it means that the apparently counterintuitive behaviour of QM is directly derivable from intuitive principles, much as the counterintuitive aspects of special relativity follow as logical consequences of its two intuitive axioms, the constancy of the speed of light and the relativity principle. Given these features, maybe we can finally say that quantum mechanics makes sense: it is the only way that the laws of physics can lead to a sensible model of information storage and communication!

Let me run through the axioms briefly (note to the wise: I take the `causality’ axiom as implicit, and I’ve changed some of the names to make them sound nicer). I’ll assume the reader is familiar with the distinction between pure states and mixed states, but here is a brief summary. Roughly, a pure state describes a system about which you have maximum information, whereas a mixed state can be interpreted as uncertainty about which pure state the system is really in. Importantly, a pure state does not need to determine the outcomes to every measurement that could be performed on it: even though it contains maximal information about the state, it might only specify the probabilities of what will happen in any given experiment. This is what we mean when we say a theory is `probabilistic’.

First axiom (Distinguishability): if there is a mixed state, for which there is at least one pure state that it cannot possibly be with any probability, then the mixed state must be perfectly distinguishable from some other state (presumably, the aforementioned one). It is hard to imagine how this rule could fail: if I have a bag that contains either a spider or a fly with some probability, I should have no problem distinguishing it from a bag that contains a snake. On the other hand, I can’t so easily tell it apart from another bag that simply contains a fly (at least not in a single trial of the experiment).
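The bag-of-creatures intuition has a standard quantitative counterpart in quantum information theory: the trace distance, which measures how well two states can be told apart in a single trial. Here is a minimal numpy sketch (the trace distance is standard, but the example states and variable names are my own illustration, not part of the axioms):

```python
import numpy as np

def trace_distance(rho, sigma):
    # D(rho, sigma) = (1/2) * sum of |eigenvalues of (rho - sigma)|.
    # D = 1 means the states can be told apart with certainty in one trial.
    return 0.5 * np.sum(np.abs(np.linalg.eigvalsh(rho - sigma)))

# Basis states: |spider>, |fly>, |snake>.
mixed_bag = np.diag([0.5, 0.5, 0.0])  # spider or fly with equal probability
snake_bag = np.diag([0.0, 0.0, 1.0])  # definitely a snake
fly_bag   = np.diag([0.0, 1.0, 0.0])  # definitely a fly

print(round(trace_distance(mixed_bag, snake_bag), 6))  # 1.0: perfectly distinguishable
print(round(trace_distance(mixed_bag, fly_bag), 6))    # 0.5: not, in a single trial
```

The mixed bag has no overlap with the snake bag, so the distance is maximal, exactly the situation the axiom demands; against the fly bag the overlap forces the distance below one.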

Second axiom (Compression): If a system contains any redundant information or `extra space’, it should be possible to encode it in a smaller system such that the information can be perfectly retrieved. For example, suppose I have a badly edited book containing multiple copies of some pages, and a few blank pages at the end. I should be able to store all of the information written in the book in a much smaller book, without losing any information, just by removing the redundant copies and blank pages. Moreover, I should be able to recover the original book by copying pages and adding blank pages as needed. This seems like a pretty intuitive and essential feature of the way information is encoded in physical systems.

Third axiom (Locality of information): If I have a joint system (say, of two particles) that can be in one of two different states, then I should be able to distinguish the two different states over many trials, by performing only local measurements on each individual particle and using classical communication. For example, we allow the local measurements performed on one particle to depend on the outcomes of the local measurements on the other particle. On the other hand, we do not need to make use of any other shared resources (like a second set of correlated particles) in order to distinguish the states. I must admit, out of all the axioms, this one seems the hardest to justify intuitively. What indeed is so special about local operations and classical communication that it should be sufficient to tell different states apart? Why can’t we imagine a world in which the only way to distinguish two states of a joint system is to make use of some other joint system? But let us put this issue aside for the moment.

Fourth axiom (Locality of ignorance): If I have two particles in a joint state that is pure (i.e. I have maximal information about it) and if I measure one of them and find it in a pure state, the axiom states that the other particle must also be in a pure state. This makes sense: if I do a measurement on one subsystem of a pure state that results in still having maximal information about that subsystem, I should not lose any information about the other subsystems during the process. Learning new information about one part of a system should not make me more ignorant of the other parts.

So far, all of the axioms described above are satisfied by classical and quantum information theory. Therefore, at the very least, if any of these axioms do not seem intuitive, it is only because our intuitions about classical physics are not sufficiently well developed, so it cannot really be taken as a fault of the axioms themselves (which is why I am not so concerned about the detailed justification for axiom 3). The interesting axiom is the last one, `purification’, which holds in quantum physics but not in probabilistic classical physics.

Fifth axiom (Conservation of information) [aka the purification postulate]: Every mixed state of a system can be obtained by starting with several systems in a joint pure state, and then discarding or ignoring all except for the system in question. Thus, the mixedness of any state can be interpreted as ignorance of some other correlated states. Furthermore, we require that the purification be essentially unique: all possible pure states of the total set of systems that do the job must be convertible into one another by reversible transformations.

As stated above, it is not so clear why this property should hold in the world. However, it makes more sense if we consider one of its consequences: every irreversible, probabilistic process can be obtained from a reversible process involving additional systems, which are then ignored. In the same way that statistical mechanics allows us to imagine that we could un-scramble an egg, if only we had complete information about its individual atoms and the power to re-arrange them, the purification postulate says that everything that occurs in nature can be un-done in principle, if we have sufficient resources and information. Another way of stating this is that the loss of information that occurs in a probabilistic process is only apparent: in principle the information is conserved somewhere in the universe and is never lost, even though we might not have direct access to it. The `missing information’ in a mixed state is never lost forever, but can always be accessed by some observer, at least in principle.
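The construction behind the postulate is easy to exhibit concretely. Here is a minimal numpy sketch (the particular state and variable names are my own, purely for illustration): any mixed state rho_A with eigenvalues p_i is the reduced state of the pure state |psi> = sum_i sqrt(p_i) |i>_A |i>_B on a doubled system.

```python
import numpy as np

# A mixed state of system A: a probability distribution over orthogonal pure states.
p = np.array([0.5, 0.3, 0.2])
rho_A = np.diag(p)          # density matrix, diagonal in the chosen basis
d = len(p)

# Purification: |psi> = sum_i sqrt(p_i) |i>_A |i>_B on the doubled system AB.
psi = np.zeros(d * d)
for i in range(d):
    e_i = np.zeros(d)
    e_i[i] = 1.0
    psi += np.sqrt(p[i]) * np.kron(e_i, e_i)
rho_AB = np.outer(psi, psi)  # the joint state is pure: rho_AB = |psi><psi|

# Discarding system B (the partial trace) recovers the mixed state of A.
rho_A_recovered = np.trace(rho_AB.reshape(d, d, d, d), axis1=1, axis2=3)

assert np.allclose(rho_A_recovered, rho_A)
print(np.trace(rho_AB @ rho_AB))  # purity of the whole: ~1 (pure)
print(np.trace(rho_A @ rho_A))    # purity of the part: ~0.38 (mixed)
```

The joint state has purity one while the part does not, which is exactly the statement that the `missing information’ of the part resides in its correlations with the rest, rather than being lost.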

It is curious that probabilistic classical physics does not obey this property. Surely it seems reasonable to expect that one could construct a probabilistic classical theory in which information is ultimately conserved! In fact, if one attempts this, one arrives at a theory of deterministic classical physics. In such a theory, having maximal knowledge of a state (i.e. the state is pure) further implies that one can perfectly predict the outcome of any measurement on the state, but this means the theory is no longer probabilistic. Indeed, for a classical theory to be probabilistic in the sense that we have defined the term, it necessarily allows processes in which information is irretrievably lost, violating the spirit of the purification postulate.

In conclusion, I’d say this is pretty close to the mystical “Zing” that we were looking for: quantum mechanics is the only reasonable theory in which processes can be inherently probabilistic while at the same time conserving information.

# The Zen of the Quantum Omelette

“[Quantum mechanics] is not purely epistemological; it is a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature, all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble. Yet we think that the unscrambling is a prerequisite for any further advance in basic physical theory. For, if we cannot separate the subjective and objective aspects of the formalism, we cannot know what we are talking about; it is just that simple.” [1]

— E. T. Jaynes

Note: this post is about foundational issues in quantum mechanics, which means it is rather long and may be boring to non-experts (not to mention a number of experts). I’ve tried to use simple language so that the adventurous layman can nevertheless still get the gist of it, if he or she is willing (hey, fortune favours the brave).

As I’ve said before, I think research on the foundations of quantum mechanics is important. One of the main goals of work on foundations (perhaps the main goal) is to find a set of physical principles that can be stated in common language, but can also be implemented mathematically to obtain the model that we call `quantum mechanics’.

Einstein was a big fan of starting with simple intuitive principles on which a more rigorous theory is based. The special and general theories of relativity are excellent examples. Both are based on the `Principle of Relativity’, which states (roughly) that motion between two systems is purely relative. We cannot say whether a given system is truly in motion or not; the only meaningful question is whether the system is moving relative to some other system. There is no absolute background space and time in which objects move or stand still, like actors on a stage. In fact there is no stage at all, only the mutual distances between the actors, as experienced by the actors themselves.

The way I have stated the principle is somewhat vague, but it has a clear philosophical intention which can be taken as inspiration for a more rigorous theory. Of particular interest is the identification of a concept that is argued to be meaningless or illusory — in this case the concept of an object having a well-defined motion independent of other objects. One could arrive at the Principle of Relativity by noticing an apparent conspiracy in the laws of nature, and then invoking the principle as a means of avoiding the conspiracy. If we believe that motion is absolute, then we should find it mighty strange that we can play a game of ping-pong on a speeding train, without getting stuck to the wall. Indeed, if it weren’t for the scenery flying past, how would we know we were traveling at all? And even then, as the phrasing suggests, could we not easily imagine that it is the scenery moving past us while we remain still? Why, then, should Nature take such pains to hide from us the fact that we are in motion? The answer is the Zen of relativity — Nature does not conceal our true motion from us, instead, there is no absolute motion to speak of.

A similar leap is made from the special to the general theory of relativity. If we think of gravity as being a field, just like the electromagnetic field, then we notice a very strange coincidence: the charge of an object in the gravitational field is exactly equal to its inertial mass. By contrast, a particle can have an electric charge completely unrelated to its inertia. Why this peculiar conspiracy between gravitational charge and inertial mass? Because, quoth Einstein, they are the same thing. This is essentially the `Principle of Equivalence’ on which Einstein’s theory of gravity is based.
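The conspiracy can be put in one line (a standard textbook manipulation, added here for illustration). Newton’s second law for a body of inertial mass $m_i$ and gravitational charge $m_g$ in a field of strength $g$ gives

```latex
m_i \, a = m_g \, g
\quad\Longrightarrow\quad
a = \frac{m_g}{m_i}\, g \,,
```

so if $m_g = m_i$ for every body, the acceleration $a = g$ is the same for all bodies regardless of their composition. Compare the electromagnetic case, $a = (q/m_i)E$, where the charge-to-mass ratio varies freely from particle to particle.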

These considerations tell us that to find the deep principles in quantum mechanics, we have to look for seemingly inexplicable coincidences that cry out for explanation. In this post, I’ll discuss one such possibility: the apparent equivalence of two conceptually distinct types of probabilistic behaviour, that due to ignorance and that due to objective uncertainty. The argument runs as follows. Loosely speaking, in classical physics, one does not seem to require any notion of objective randomness or inherent uncertainty. In particular, it is always possible to explain observations using a physical model that is ontologically within the bounds of classical theory and such that all observable properties of a system are determined with certainty. In this sense, any uncertainty arising in classical experiments can always be regarded as our ignorance of the true underlying state of affairs, and we can perfectly well conceive of a hypothetical perfect experiment in which there is no uncertainty about the outcomes.

This is not so easy to maintain in quantum mechanics: any attempt to conceive of an underlying reality without uncertainty seems to result in models of the world that violate dearly-held principles, like the idea that signals cannot propagate faster than light, or that experimenters have free will. This has prompted many of us to allow some amount of `objective’ uncertainty into our picture of the world, where even the best conceivable experiments must have some uncertain outcomes. These outcomes are unknowable, even in principle, until the moment that we choose to measure them (and the very act of measurement renders certain other properties unknowable). The presence of these two kinds of randomness in physics — the subjective randomness, which can always be removed by some hypothetical improved experiment, and the objective kind of randomness, which cannot be so removed — leads us into another dilemma, namely, where is the boundary that separates these two kinds of uncertainty?

Now at last we come to the `omelette’ that badass statistician and physicist E.T. Jaynes describes in the opening quote. Since quantum systems are inherently uncertain objects, how do we know how much of that uncertainty is due to our own ignorance, and how much of it is really `inside’ the system itself? Views range from the extreme subjective Bayesian (all uncertainty is ignorance) to various other extremes like the many-worlds interpretation (in which, arguably, the opposite holds: all uncertainty is objective). But a number of researchers, particularly those in the quantum information community, opt for a more Zen-like answer: the reason we can’t tell the difference between objective and subjective probability is that there is no difference. Asking whether the quantum state describes my personal ignorance about something, or whether the state “really is” uncertain, is a meaningless question. But can we take this Zen principle and turn it into something concrete, like the Relativity principle, or are we just dodging the problem with semantics?

I think there might be something to be gained from taking this idea seriously and seeing where it leads. One way of doing this is to show that the predictions of quantum mechanics can be derived by taking this principle as an axiom. In this paper by Chiribella et al., the authors use the “Purification postulate”, plus some other axioms, to derive quantum theory. What is the Purification postulate? It states that “the ignorance about a part is always compatible with a maximal knowledge of the whole”. Or, in my own words, the subjective ignorance of one system about another system can always be regarded as the objective uncertainty inherent in the state that encompasses both.

There is an important side comment to make before examining this idea further. You’ll notice that I have not restricted my usage of the word `ignorance’ to human experimenters, but that I take it to apply to any physical system. This idea also appears in relativity, where an “observer in motion” can refer to any object in motion, not necessarily a human. Similarly, I am adopting here the viewpoint of the information theorists, which says that two correlated or interacting systems can be thought of as having information about each other, and the quantification of this knowledge entails that systems — not just people — can be ignorant of each other in some sense. This is important because I think that an overly subjective view of probabilities runs the risk of concealing important physics behind the definition of the `rational agent’, which to me is a rather nebulous concept. I prefer to take the route of Rovelli and make no distinction between agents and generic physical systems. I think this view fits quite naturally with the Purification postulate.

In the paper by Chiribella et al., the postulate is given a rigorous form and used to derive quantum theory. This alone is not quite enough, but it is, I think, very compelling. To establish the postulate as a physical principle, more work needs to be done on the philosophical side. I will continue to use Rovelli’s relational interpretation of quantum mechanics as an integral part of this philosophy (for a very readable primer, I suggest his FQXi essay).

In the context of this interpretation, the Purification postulate makes more sense. Conceptually, the quantum state does not represent information about a system in isolation, but rather it represents information about a system relative to another system. It is as meaningless to talk about the quantum state of an isolated system as it is to talk about space-time without matter (i.e. Mach’s principle [2]). The only meaningful quantities are relational quantities, and in this spirit we consider the separation of uncertainty into subjective and objective parts to be relational and not fundamental. Can we make this idea more precise? Perhaps we can, by associating subjective and objective uncertainty with some more concrete physical concepts. I’ll probably do that in a follow up post.

I conclude by noting that there are other aspects of quantum theory that cry out for explanation. If hidden variable accounts of quantum mechanics imply elements of reality that move faster than light, why does Nature conspire to prevent us using them for sending signals faster than light? And since the requirement of no faster-than-light signalling still allows correlations that are stronger than entanglement, why does entanglement stop short of that limit? I think there is still a lot that could be done in trying to turn these curious observations into physical principles, and then trying to build models based on them.

# Why Quantum Gravity needs Operationalism: Part 2

(Update: My colleagues pointed out that Wittgenstein was one of the greatest philosophers of the 20th century and I should not make fun of him, and anyway he was only very loosely associated with the Vienna circle. All well and true — but he was at least partly responsible for the idea that got the Vienna Circle onto Verificationism, and all of you pedants can go look at the references if you don’t believe me.)

“Where neither confirmation nor refutation is possible, science is not concerned.”    — Mach

Some physicists give philosophy a bad rap. I like to remind them that all the great figures in physics had a keen interest in philosophy, and were strongly influenced by the work of philosophers. Einstein made contributions to philosophy as well as physics, as did Ernst Mach, whose philosophical work had a strong influence on Einstein in formulating his General Theory of Relativity. In his own attitude to philosophy, Einstein was a self-described “epistemological opportunist” [1]. (Epistemology is, broadly speaking, the philosophy of knowledge and how it is acquired.) But philosophy sometimes gets in the way of progress, as explained in the following story.

A physicist was skipping along one day when he came upon a philosopher, standing rigid in the forest. “Why standeth you thus?” he inquired.

“I am troubled by a paradox!” said the philosopher. “How is it that things can move from place to place?”

“What do you mean? I moved here by skipping, didn’t I?”

“Yes, sure. But I cannot logically explain why the world allows it to be so. You see, a philosopher named Zeno argued that in order to traverse any finite distance, one would have to first traverse an infinite number of partitions of that distance. But how can one make sense of completing an infinite number of tasks in a finite amount of time?”

“Well dang,” said the physicist “that’s an interesting question. But wait! Could it be that space and time are actually divided up into a finite number of tiny chunks that cannot be sub-divided further? What an idea!”

“Ah! Perhaps,” said the philosopher, “but what if the world is indeed a continuum? Then we are truly stuck.”

At that moment, a mathematician who had been dozing in a tree fell out and landed with a great commotion.

“Terribly sorry! Couldn’t help but overhear,” he said. “In fact I do believe it is conceptually possible for an infinite number of things to add up to a finite quantity. Why, this gives me a great idea for calculating the area under curves. Thank you so much, I’d better get to it!”

“Yes, yes we must dash at once! There’s work to do!” agreed the physicist.

“But wait!” cried the philosopher, “what if time is merely an illusion? And what is the connection of abstract mathematics to the physical world? We have to work that out first!”

But the other two had already disappeared, leaving the philosopher in his forest to ponder his way down deeper and ever more complex rabbit-holes of thought.

***

Philosophy is valuable for pointing us in the right direction and helping us to think clearly. Sometimes philosophy can reveal a problem where nobody thought there was one, and this can lead to a new insight. Sometimes philosophy can identify and cure fallacies in reasoning. In solving a problem, it can highlight alternative solutions that might not have been noticed otherwise. But ultimately, physicists only tend to turn to philosophy when they have run out of ideas, and most of the time the connection of philosophy to practical matters seems tenuous at best. If philosophers have a weakness, it is only that they tend to think too much, whereas a physicist only thinks as hard as he needs to in order to get results.

After that brief detour, we are ready to return to our hero — physicist Percy Bridgman — and witness his own personal fling and falling-out with philosophy. In a previous post, we introduced Bridgman’s idea of operationalism. Recall that Bridgman emphasized that a physical quantity such as `length’ or `temperature’ should always be attached to some clear notion of how to measure that quantity in an experiment. It is not much of a leap from there to say that a concept is only meaningful if it comes equipped with instructions of how to measure it physically.

Although Bridgman was a physicist, his idea quickly caught on amongst philosophers, who saw in it the potential for a more general theory of meaning. But Bridgman quickly became disillusioned with the direction the philosophers were taking as it became increasingly clear that operationalism could not stand up to the demanding expectations set by the philosophers.

The main culprits were a group of philosophers called the Vienna Circle [2]. Following an idea of Ludwig Wittgenstein, these philosophers attempted to define concepts as meaningful only if they could somehow be verified in principle, an approach that became known as Verificationism. Verificationism was a major theme of the school of thought called `logical empiricism’ (aka logical positivism), the variants of which are embodied in the combined work of philosophers in the Vienna Circle, notably Reichenbach, Carnap and Schlick, as well as members outside the group, like the Berlin Society.

At that time, Bridgman’s operationalism was closely paralleled by the ideas of the Verificationists. This was unfortunate because around the middle of the 20th century it became increasingly apparent that there were big philosophical problems with this idea. On the physics side of things, physicists pointed out that there could be meaningful concepts that could not be directly verified. Einstein noted that we cannot measure the electric field inside a solid body, yet it is still meaningful to define the field at all points in space:

“We find that such an electrical continuum is always applicable only for the representation of electrical states of affairs in the interior of ponderable bodies. Here too we define the vector of electric field strength as the vector of the mechanical force exerted on the unit of positive electric quantity inside a body. But the force so defined is no longer directly accessible to experiments. It is one part of a theoretical construction that can be correct or false, i.e., consistent or not consistent with experience, only as a whole.” [1]

Incidentally, Einstein got this point of view from a philosopher, Duhem, who argued that isolated parts of a theory do not stand as meaningful on their own; only when taken together as a whole can they be matched with empirical data. It therefore does not always make sense to isolate some apparently metaphysical aspect of a theory and criticize it as not being verifiable. In a sense, the verifiability of an abstract quantity like the electric field hinges on its placement within a larger theoretical framework that extends to the devices used to measure the field.

In addition, the Verificationists began to fall apart over some rather technical philosophical points. It went something like this:

Wittgenstein: “A proposition is meaningful if and only if it is conceivable for the proposition to be completely verified!”

Others: “What about the statement `All dogs are brown’? I can’t very well check that all dogs are brown can I? Most of the dogs who ever lived are long dead, for a start.”

Wittgenstein: “Err…”

Others: “And what about this guy Karl Popper? He says nothing can ever be completely verified. Our theories are always wrong, they just get less wrong with time.”

Wittgenstein: *cough* *cough* I have to go now. (runs away).

Carnap: “Look, we don’t have to take such a hard line. Statements like `All dogs are brown’ are still meaningful, even though they can’t be completely verified.”

Schlick: “No, no, you’ve got it wrong! Statements like `All dogs are brown’ are meaningless! They simply serve to guide us towards other statements that do have meaning.”

Quine: “No, you guys are missing a much worse problem with your definition: how do you determine which statements actually require verification (like `The cat sat on the mat’), and which ones are just true by definition (`All bachelors are unmarried’)? I can show that there is no consistent way to separate the two kinds of statement.”

So you can see how the philosophers tend to get carried away. And where was poor old Percy Bridgman during all this? He was backed into a corner, with people prodding his chest and shouting at him:

Gillies: “How do you tell if a measurement method is valid? If there is nothing more to a concept than its method of measurement, then every method of measurement is automatically valid!”

Bridgman: “Well, yes, I suppose…”

Positivists: “And isn’t it true that even if we all agree to use a single measurement of length, this does not come close to exhausting what we mean by the word length? How disappointing.”

Bridgman: “Now wait a minute –”

Margenau: “And just what the deuce do you mean by `operations’ anyhow?”

Bridgman: “Well, I … hey, aren’t you a physicist? You should be on my side!”

(Margenau discreetly melts into the crowd)

To cut a long story short, by the time Quine was stomping on the ashes of what once was logical empiricism, Bridgman’s operationalism had suffered a similar fate, leaving Bridgman battered and bloody on the sidelines wondering where he went wrong:

“To me now it seems incomprehensible that I should ever have thought it within my powers … to analyze so thoroughly the functioning of our thinking apparatus that I could confidently expect to exhaust the subject and eliminate the possibility of a bright new idea against which I would be defenseless.”

To console himself, Bridgman retreated to his laboratory where he at least knew what things were, and could spend hours hand-drilling holes in blocks of steel without having to waste his time arguing about it. Sometimes the positivists would prod him, saying:

“Bridgman! Hey Bridgman! If I measure the height of the Eiffel tower, does that count as an operation, or do you have to perform every experiment yourself?” to which Bridgman would narrow his eyes and mutter: “I don’t trust any experimental results except the ones I perform myself. Now leave me alone!”

Needless to say, Bridgman’s defiantly anti-social attitude to science did not help improve the standing of operationalism among philosophers or physicists; few people were prepared to agree that every experiment has to be verified by each individual for him or herself. Nevertheless, Bridgman remained a heroic figure and a defender of the scientific method as the best way to cope with an otherwise incomprehensible and overwhelming universe. Bridgman’s stubborn attitude of self-reliance was powerfully displayed in his final act: he committed suicide by gunshot after being diagnosed with metastatic cancer. In his suicide note, he wrote [3]:

“It isn’t decent for society to make a man do this thing himself. Probably this is the last day I will be able to do it myself.”

Bridgman’s original conception of operationalism continues to resonate with physicists to this very day. In the end he was forced to admit that it did not constitute a rigorous philosophical doctrine of meaning, and he retracted some of his initially over-optimistic statements. However, he never gave up the more pragmatic point of view that an operationalist attitude can be beneficial to the practicing scientist. Towards the end of his life, he maintained that:

“…[T]here is nothing absolute or final about an operational analysis […]. So far as any dogma is involved here at all, it is merely the conviction that it is better, because it takes us further, to analyze into doings or happenings rather than into objects or entities.”

[1]  See the SEP entry on Einstein’s philosophy: http://plato.stanford.edu/entries/einstein-philscience/

[2] SEP entry on the Vienna Circle: http://plato.stanford.edu/entries/vienna-circle/

[3] Sherwin B Nuland, “How We Die: Reflections on Life’s Final Chapter” Random House 1995

# Why quantum gravity needs operationalism: Part 1

This is the first of a series of posts in which I will argue that physicists can gain insight into the puzzles of quantum gravity if we adopt a philosophy I call operationalism. The traditional interpretation of operationalism by philosophers was found to be lacking in several important ways, so the concept will have to be updated to a modern context if we are to make use of it, and its new strengths and limitations will need to be clarified. The goal of this first post is to introduce you to operationalism as it was originally conceived and as I understand it. Later posts will explain the areas in which it failed as a philosophical doctrine, and why it might nevertheless succeed as a tool in theoretical physics, particularly in regard to quantum gravity [1].

Operationalism started with Percy Williams Bridgman. Bridgman was a physicist working in the early 20th century, at the time when the world of physics was being shaken by the twin revolutions of relativity and quantum mechanics. Einstein’s hand was behind both revolutions: first through the publication of his General Theory of Relativity in 1916, and second through his explanation of the photoelectric effect in terms of quanta, which earned him the Nobel prize in 1921. This upheaval was a formative time for Bridgman, who was especially struck by Einstein’s clever use of thought experiments to derive special relativity.

Einstein had realized that there was a problem with the concept of `simultaneity’. Until then, everybody had taken it for granted that if two events are simultaneous, then they occur at the same time no matter who is observing them. But Einstein asked the crucial question: how does a person know that two events happened at the same time? To answer it, he had to adopt an operational definition of simultaneity: an observer traveling at constant velocity will consider two equidistant events to be simultaneous if beams of light emitted from each event reach the location of the observer at the same time, as measured by the observer’s clock (this definition can be further generalised to apply to any pair of events as seen by an observer in arbitrary motion).

From this, one can deduce that the relativity principle implies the relativity of simultaneity: two events that are simultaneous for one observer may not be simultaneous for another observer in relative motion. This is one of the key observations of special relativity. Bridgman noticed that Einstein’s deep insight relied upon taking an abstract concept, in this case simultaneity, and grounding it in the physical world by asking `what sort of operations must be carried out in order to measure this thing’?
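The deduction can be sketched with the standard Lorentz transformation (the usual setup of two inertial frames in relative motion along x is assumed here; the formula itself is not derived in this post):

```latex
% Frames S and S' in standard configuration, S' moving at speed v along x.
% A time interval between two events transforms as
\Delta t' = \gamma\left(\Delta t - \frac{v\,\Delta x}{c^2}\right),
\qquad \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}.
% If the two events are simultaneous in S (\Delta t = 0) but spatially
% separated (\Delta x \neq 0), then
\Delta t' = -\gamma\,\frac{v\,\Delta x}{c^2} \neq 0,
% so they are not simultaneous in the moving frame S'.
```

In words: simultaneity in one frame, plus spatial separation, forces a nonzero time difference in any other frame moving along the line joining the events.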

For his own part, Bridgman was a brilliant experimentalist who won the Nobel prize in 1946 for his pioneering work on creating extremely high pressures in his laboratory. Using state-of-the-art technology, he created pressures up to 100,000 atmospheres, nearly 100 times greater than anybody before him, and then did what any good scientist would do: he put various things into his pressure chamber to record what happened to them. Mostly, as you might expect, they got squished. At pressures beyond 25,000 atmospheres, steel can be molded like play-dough; at 50,000 atmospheres all normal liquids have frozen solid. (Of course, Bridgman’s vessel had to be very small to withstand such pressure, which limited the things he could put in it). But Bridgman faced a unique problem: the pressures that he created were so high that he couldn’t use any standard pressure gauge to measure them, because the gauge would basically get squished like everything else. The situation is the same as trying to measure the temperature of the sun using a regular thermometer: it would explode and vaporize before you could even take a proper reading. Consequently, Bridgman had no scientific way to tell between `really high pressure’ and `really freaking high pressure’, so he was forced to design completely new ways of measuring pressure in his laboratory, such as looking at the phase transition of the element Bismuth and the resistivity of the alloy Manganin [2]. This led him to wonder: what does a concept like `pressure’ or `temperature’ really mean in the absence of a measuring technique?

Bridgman proposed that quantities measured by different operations should always be regarded as being fundamentally different, even though they may coincide in certain situations. This led to a minor problem in the definitions of quantities. The temperature of a cup of water is measured by sticking a thermometer in it. The temperature of the sun is measured by looking at the spectrum of radiation emitted from it. If these quantities are measured by such different methods in different regimes, why do we call them both `temperature’? In what sense are our operations measuring the same thing? The solution, according to Bridgman, is that there is a regime in between the two in which both methods of measuring temperature are valid – and in this regime the two measurements must agree. The temperature of molten gold could potentially be measured by the right kind of thermometer, as well as by looking at its radiation spectrum, and both of these methods will give the same temperature. This allows us to connect the concept of temperature on the sun to temperature in your kitchen and call them by the same name.
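Bridgman’s patching idea can be illustrated with a toy sketch. This is my own invented example (the function names, thresholds, and idealized readings are all made up for illustration): two measurement procedures, each valid only in its own regime, are licensed to share the single name `temperature’ because they agree on the overlap.

```python
# Toy illustration of Bridgman's patching idea (invented example, not from
# the post): two measurement procedures, each valid in a limited regime,
# stitched into one concept by requiring agreement on the overlap.

def thermometer(true_temp):
    """Contact thermometer: usable only below 1500 (it melts above that)."""
    if true_temp >= 1500:
        raise ValueError("thermometer destroyed")
    return true_temp  # idealized: reads the true value exactly

def radiation_spectrum(true_temp):
    """Spectral method: usable only above 800 (the glow is too dim below)."""
    if true_temp <= 800:
        raise ValueError("signal too weak")
    return true_temp  # idealized: reads the true value exactly

def temperature(true_temp):
    """The patched concept: use whichever procedure applies.

    Calling both procedures 'temperature' is justified because they agree
    everywhere both are valid, i.e. on the overlap region (800, 1500).
    """
    if true_temp < 1500:
        return thermometer(true_temp)
    return radiation_spectrum(true_temp)

# Molten gold (~1337) sits in the overlap: both procedures apply and agree,
# which is what connects temperature in the kitchen to temperature on the sun.
assert thermometer(1337) == radiation_spectrum(1337)
```

The design point is that neither procedure alone defines the concept; the overlap condition is what glues the patches into one quantity.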

This method of `patching together’ different ways of measuring the same quantity is reminiscent of placing co-ordinate patches on manifolds in mathematical physics. In general, there is no way to cover an entire manifold (representing space-time for example) with a single set of co-ordinates that are valid everywhere. But we can cover different parts of the manifold in patches, provided that the co-ordinates agree in the areas where they overlap. The key insight is that there is no observer who can see all of space-time at once – any physical observer has to travel from one part of the manifold to another by a continuous route. Hence it does not matter if the observer cannot describe the entire manifold by a single map, so long as they have a series of maps that smoothly translate into one another as they travel along their chosen path – even if the maps used much later in the journey have no connection or overlap with the maps used early in the journey. Similarly, as we extend our measuring devices into new regimes, we must gradually replace them with new devices as we go. The eye is replaced with the microscope, the microscope with the electron microscope and the electron microscope with the particle accelerator, which now bears no resemblance to the eye, although they both gaze upon the same world.

Curiously, there was another man named Bridgman active around the same time, who is likely to be more familiar to artists: that is George Bridgman, author of Bridgman’s Complete Guide to Drawing From Life. Although they were two completely different Bridgmans, working in different disciplines, both of them were concerned with essentially the same problem: how to connect our internal conception of the world with the devices by which we measure the world. In the case of Percy Bridgman, it was a matter of connecting abstract physical quantities to their measurement devices, while George Bridgman aimed to connect the figure in the mind to the functions of the hands and eyes. We close with a quote from the artist:

“Indeed, it is very far from accurate to say that we see with our eyes. The eye is blind but for the idea behind the eye.”

[1] Everything I have written comes from Hasok Chang’s entry in the Stanford Encyclopedia of Philosophy on operationalism, which is both clearer and more thorough than my own ramblings.

[2] Readers interested in the finer points of Percy Bridgman’s work should see his Nobel prize lecture.