Tag Archives: Operationalism

A meditation on physical units: Part 2

[Preface: This is the second part of my discussion of this paper by Craig Holt. It has a few more equations than usual, so strap a seat-belt onto your brain and get ready!]

“Alright brain. You don’t like me, and I don’t like you, but let’s get through this thing and then I can continue killing you with beer.”    — Homer Simpson

Imagine a whale. We like to say that the whale is big. What does that mean? Well, if we measure the length of the whale, say by comparing it to a meter-stick, we will count up a very large number of meters. However, this only tells us that the whale is big in comparison to a meter-stick. It doesn’t seem to tell us anything about the intrinsic, absolute length of the whale. But what is the meaning of `intrinsic, absolute’ length?

Imagine the whale is floating in space in an empty universe. There are no planets, people, fish or meter-sticks to compare the whale to. Maybe we could say that the whale has the property of length, even though we have no way of actually measuring its length. That’s what `absolute’ length means. We can imagine that it has some actual number, independently of any standard for comparison like a meter-stick.

"Not again!"
“Oh no, not again!”

In Craig Holt’s paper, this distinction — between measured and absolute properties — is very important. All absolute quantities have primes (also called apostrophes), so the absolute length of a whale would be written as whale-length’ and the absolute length of a meter-stick is written meter’. The length of the whale that we measure, in meters, can be written as the ratio whale-length’ / meter’. This ratio is something we can directly measure, so it doesn’t need a prime; we can just call it whale-length: it is the number of meter-sticks that equal a whale-length. It is clear that if we were to change all of the absolute lengths in the universe by the same factor, then the absolute properties whale-length’ and meter’ would both change, but the measurable property of whale-length would not change.
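In symbols: if every absolute length in the universe were multiplied by the same factor k (a label I am introducing just for this illustration), then

whale-length = (k × whale-length’) / (k × meter’) = whale-length’ / meter’

so the measured number of meters comes out exactly the same as before.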

Ok, so, you’re probably thinking that it is weird to talk about absolute quantities if we can’t directly measure them — but who says that you can’t directly measure absolute quantities? I only gave you one example where, as it turned out, we couldn’t measure the absolute length. But one example is not a general proof. When you go around saying things like “absolute quantities are meaningless and therefore changes in absolute quantities can’t be detected”, you are making a pretty big assumption. This assumption has a name: it is called Bridgman’s Principle (see the last blog post).

Bridgman’s Principle is the reason why at school they teach you to balance the units on both sides of an equation. For example, `speed’ is measured in units of length per time (no, not milligrams — this isn’t Breaking Bad). If we imagine that light has some intrinsic absolute speed c’, then to measure it we would need to have (for example) some reference length L’ and some reference time duration T’ and then see how many lengths of L’ the light travels in time T’. We would write this equation as:

c’ = C L’ / T’        (1)

where C is the speed that we actually measure. Bridgman’s Principle says that a measured quantity like C cannot tell us the absolute speed of light c’, it only tells us what the value of c’ is compared to the values of our measuring apparatus, L’ and T’ (for example, in meters per second). If there were some way that we could directly measure the absolute value of c’ without comparing it to a measuring rod and a clock, then we could just write c’ = C without needing to specify the units of C. So, without Bridgman’s Principle, all of Dimensional Analysis basically becomes pointless.

So why should Bridgman’s Principle be true in general? Scientists are usually lazy and just assume it is true because it works in so many cases (this is inductive reasoning, which for all its usefulness is not actually a proof). After all, it is hard to find a way of measuring the absolute length of something without referring to some other reference object like a meter-stick. But being a good scientist is all about being really tight-assed, so we want to know if Bridgman’s Principle can be proven to be watertight.

A neat example of a watertight principle is the Second Law of Thermodynamics. This Law was also originally an inductive principle (it seemed to be true in pretty much all thermodynamic experiments) but then Boltzmann came along with his famous H-Theorem and proved that it has to be true if matter is made up of atomic particles. This is called a constructive justification of the principle [1].

The H-Theorem makes it nice and easy to judge whether some crackpot’s idea for a perpetual motion machine will actually run forever. You can just ask them: “Is your machine made out of atoms?” And if the answer is `yes’ (which it probably is), then you can point out that the H-Theorem proves that machines made up of atoms must obey the Second Law, end of story.

Coming up with a constructive proof, like the H-Theorem, is pretty hard. In the case of Bridgman’s Principle, there are just too many different things to account for. Objects can have numerous properties, like mass, charge, density, and so on; also there are many ways to measure each property. It is hard to imagine how we could cover all of these different cases with just a single theorem about atoms. Without the H-Theorem, we would have to look over the design of every perpetual motion machine, to find out where the design is flawed. We could call this method “proof by elimination of counterexamples”. This is exactly the procedure that Craig uses to lend support to Bridgman’s Principle in his paper.

To get a flavor for how he does it, recall our measurement of the speed of light from equation (1). Notice that the measured speed C does not have to be the same as the absolute speed c’. In fact we can rewrite the equation as:

c’ / (L’ / T’) = C        (2)

and this makes it clear that the number C that we measure is not itself an absolute quantity, but rather is a comparison between the absolute speed of light c’ and the absolute distance L’ per time T’. What would happen if we changed all of the absolute lengths in the universe? Would this change the value of the measured speed of light C? At first glance, you might think that it would, as long as the other absolute quantities on the left hand side of equation (2) are independent of length. But if that were true, then we would be able to measure changes in absolute length by observing changes in the measurable speed of light C, and this would contradict Bridgman’s Principle!

To get around this, Craig points out that the length L’ and time T’ are not fundamental properties of things, but are actually reducible to the atomic properties of physical rods and clocks that we use to make measurements. Therefore, we should express L’ and T’ in terms of the more fundamental properties of matter, such as the masses of elementary particles and the coupling constants of forces inside the rods and clocks. In particular, he argues that the absolute length of any physical rod is equal to some number times the “Bohr radius” of a typical atom inside the rod. This radius is in turn proportional to:

h’ / (m’e c’)        (3)

where h’, c’ are the absolute values of Planck’s constant and the speed of light, respectively, and m’e is the absolute electron mass. Similarly, the time duration measured by an atomic clock is proportional to:

h’ / (m’e c’^2)        (4)

As a result, both the absolute length L’ and time T’ actually depend on the absolute constants c’, h’ and the electron mass m’e. Substituting these into the expression for the measured speed of light, we get:

c’ T’ / L’ = c’ × [X h’ / (m’e c’^2)] / [Y h’ / (m’e c’)] = X / Y        (5)

where X, Y are some proportionality constants. So the factors of c’, h’ and m’e all cancel, and we are left with C = X/Y. The numbers X and Y depend on how we construct our rods and clocks — for instance, they depend on how many atoms are inside the rod, and what kind of atom we use inside our atomic clock. In fact, the definitions of the `meter’ and the `second’ are specially chosen so as to make this ratio exactly C=299,792,458 [2].
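If you like to check such cancellations symbolically, here is a minimal sketch using Python’s sympy library (my own illustration, not code from Craig’s paper; the symbols are just the quantities defined above, with the primes dropped from the names):

import sympy as sp

# Absolute quantities as positive symbols (primes dropped from the names)
c, h, m_e, X, Y = sp.symbols("c h m_e X Y", positive=True)

L = Y * h / (m_e * c)       # absolute rod length, from eq. (3) with constant Y
T = X * h / (m_e * c**2)    # absolute clock period, from eq. (4) with constant X

C = c * T / L               # the measured speed of light, from eq. (2)
print(sp.simplify(C))       # -> X/Y : every absolute quantity has cancelled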

Now that we have included the fact that our measuring rods and clocks are made out of matter, we see that the left hand side of equation (5), although built entirely out of absolute quantities, is equal to X/Y and hence independent of all of them. Therefore changing the absolute length, time, mass, speed etc. cannot have any effect on the measured speed of light C, and Bridgman’s principle is safe — at least in this example.

(Some readers might wonder why making a clock heavier should also make it run faster, as seems to be suggested by equation (4). It is important to remember that the usual kinds of clocks we use, like wristwatches, are quite complicated things containing trillions of atoms. To calculate how the behaviour of all these atoms would change the ticking of the overall clock mechanism would be, to put it lightly, a giant pain in the ass. That’s why Craig only considers very simple devices like atomic clocks, whose behaviour is well understood at the atomic level [3].)

image credit: xetobyte – A Break in Reality

Another simple model of a clock is the light clock: a beam of light bouncing between two mirrors separated by a fixed distance L’. Since light has no mass, you might think that the frequency of such a clock should not change if we were to increase all absolute masses in the universe. But we saw in equation (4) that the frequency of an atomic clock is proportional to the electron mass, and so it would increase. It then seems like we could measure this increase in atomic clock frequency by comparing it to a light clock, whose frequency does not change — and then we would know that the absolute masses had changed. Is this another threat to Bridgman’s Principle?

The catch is that, as Craig points out, the length L’ between the mirrors of the light clock is determined by a measuring rod, and the rod’s length is inversely proportional to the electron mass as we saw in equation (3). So if we magically increase all the absolute masses, we would also cause the absolute length L’ to get smaller, which means the light-clock frequency would increase. In fact, it would increase by exactly the same amount as the atomic clock frequency, so comparing them would not show us any difference! Bridgman’s Principle is saved again.
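To see the cancellation explicitly, compare how the two frequencies depend on the absolute quantities, using the proportionalities (3) and (4):

atomic clock frequency ∝ 1 / T’ ∝ m’e c’^2 / h’
light clock frequency ∝ c’ / L’ ∝ c’ × m’e c’ / h’ = m’e c’^2 / h’

Both frequencies depend on the electron mass in exactly the same way, so scaling all the absolute masses changes them by the same factor and their ratio stays fixed.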

Let’s do one more example, this time a little bit more extreme. According to Einstein’s theory of general relativity, every lump of mass has a Schwarzschild radius, which is the radius of a sphere such that if you crammed all of the mass into this sphere, it would turn into a black hole. Given some absolute amount of mass M’, its Schwarzschild radius is given by the equation:

R’ = 2 G’ M’ / c’^2        (6)

where c’ is the absolute speed of light from before, and G’ is the absolute gravitational constant, which determines how strong the gravitational force is. Now, glancing at the equation, you might think that if we keep increasing all of the absolute masses in the universe, planets will start turning into black holes. For instance, the radius of Earth is about 6370 km. This is the Schwarzschild radius for a mass of roughly a billion times Earth’s mass. So if we magically increased all absolute masses by a factor of a billion, shouldn’t Earth collapse into a black hole? Then, moments before we all die horribly, we would at least know that the absolute mass has changed, and Bridgman’s Principle was wrong.
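(You can check that factor yourself with a quick back-of-envelope calculation; these are standard textbook values, nothing from Craig’s paper.)

# Schwarzschild radius of Earth: R_s = 2 G M / c^2
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_earth = 5.97e24    # mass of Earth, kg
R_earth = 6.371e6    # radius of Earth, m

R_s = 2 * G * M_earth / c**2
print(R_s)               # ~ 0.009 m : Earth's Schwarzschild radius is about 9 mm
print(R_earth / R_s)     # ~ 7e8 : the mass must grow roughly a billion-fold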

Of course, that is only true if changing the absolute mass doesn’t affect the other absolute quantities in equation (6). But as we now know, increasing the absolute mass will cause our measuring rods to shrink, and our clocks to run faster. So the question is, if we scale the masses by some factor X, do all the X‘s cancel out in equation (6)?

Well, since our absolute lengths have to shrink, the Schwarzschild radius should shrink, so if we multiply M’ by X, then we should divide the radius R’ by X. This doesn’t balance! Hold on though — we haven’t dealt with the constants c’ and G’ yet. What happens to them? In the case of c’, we have c’ = C L’ / T’. Since L’ and T’ both decrease by a factor of X (lengths and time intervals get shorter) there is no overall effect on the absolute speed of light c’.

How do we measure the quantity G’? Well, G’ tells us how much two masses (measured relative to a reference mass m’) will accelerate towards each other due to their gravitational attraction. Newton’s law of gravitation says:

a’ = N × G’ m’ / L’^2        (7)

where N is some number that we can measure, and it depends on how big the two masses are compared to the reference mass m’, how large the distance between them is compared to the reference length L’, and so forth. If we measure the acceleration a’ using the same reference length and time L’,T’, then we can write:

a’ = A L’ / T’^2        (8)

where A is just the measured acceleration in these units. Putting this all together, we can re-arrange equation (7) to get:

G’ = (A/N) × L’^3 / (m’ T’^2)        (9)

and we can define G = (A/N) as the actually measured gravitational constant in the chosen units. From equation (9), we see that increasing M’ by a factor of X (and with it the reference mass m’, since m’ is a mass like any other), and hence dividing each instance of L’ and T’ by X, implies that the absolute constant G’ will actually change: it will be divided by a factor of X^2.

What is the physics behind all this math? It goes something like this: suppose we are measuring the attraction between two masses separated by some distance. If we increase the masses, then our measuring rods shrink and our clocks get faster. This means that when we measure the accelerations, the objects seem to accelerate faster than before. This is what we expect, because two masses should become more attractive (at the same distance) when they become more massive. However, the absolute distance between the masses also has to shrink. The net effect is that, after increasing all the absolute masses, we find that the masses are producing the exact same attractive force as before, only at a closer distance. This means the absolute attraction at the original distance is weaker — so G’ has become weaker after the absolute masses in the universe have been increased (notice, however, that the actually measured value G does not change).

Diagram of a Cavendish experiment for measuring gravity.

Returning now to equation (6), and multiplying M’ by X, dividing R’ by X and dividing G’ by X^2, we find that all the extra factors cancel out. We conclude that increasing all the absolute masses in the universe by a factor of a billion will not, in fact, cause Earth to turn into a black hole, because the effect is balanced out by the contingent changes in the absolute lengths and times of our measuring instruments. Whew!
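Once again the bookkeeping can be checked symbolically; here is a sketch in sympy (my own illustration of the argument, with the scale factor called X as in the text):

import sympy as sp

X, A, N, L, T, m, M, c = sp.symbols("X A N L T m M c", positive=True)

G_abs = (A / N) * L**3 / (m * T**2)    # the absolute constant G', eq. (9)

# Scale all absolute masses up by X; rods shrink and clocks speed up,
# so L and T are divided by X, while c' = C L'/T' is untouched.
G_scaled = G_abs.subs({L: L / X, T: T / X, m: X * m}, simultaneous=True)
print(sp.simplify(G_scaled / G_abs))    # -> 1/X**2 : G' is divided by X^2

# Now check the Schwarzschild radius, eq. (6): R' = 2 G' M' / c'^2
R_before = 2 * G_abs * M / c**2
R_after = 2 * G_scaled * (X * M) / c**2
print(sp.simplify(R_after / R_before))  # -> 1/X : R' shrinks like every other absolute length

Since our measuring rods have also shrunk by a factor of X, the measured Schwarzschild radius, in rod units, comes out exactly as it was before.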

Craig’s paper is long and very thorough. He compares a whole zoo of physical clocks, including electric clocks, light-clocks, freely falling inertial clocks, different kinds of atomic clocks and even gravitational clocks made from two orbiting planets. Not only does he establish his claim within Newtonian mechanics, he covers general relativity as well, and the Dirac equation of quantum theory, including a discussion of Compton scattering (a photon reflecting off an electron). Besides all of this, he takes pains to discuss the meaning of coupling constants, the Planck scale, and the related but distinct concept of scale invariance. All in all, Craig’s paper just might be the most comprehensive justification for Bridgman’s principle so far in existence!

Most scientists might shrug and say “who needs it?”. In the same way, not many scientists care to examine perpetual motion machines to find out where the flaw lies. In this respect, Craig is a craftsman of the first order — he cares deeply about the details. Unlike the Second Law of Thermodynamics, Bridgman’s Principle seems rarely to have been challenged. This only makes Craig’s defense of it all the more important. After all, it is especially those beliefs which we are disinclined to question that are most deserving of a critical examination.


Footnotes:

[1] Some physical principles, like the Relativity Principle, have never been given a constructive justification. For this reason, Einstein himself seems to have regarded the Relativity Principle with some suspicion. See this great discussion by Brown and Pooley.

[2] Why not just set it to C=1? Well, no reason why not! Then we would replace the meter by the `light second’, and the second by the `light-meter’. And we would say things like “Today I walked 0.3 millionths of a light second to buy an ice-cream, and it took me just 130 billion light-meters to eat it!” So, you know, that would be a bit weird. But theorists do it all the time.

[3] To be perfectly strict, we cannot assume that a wristwatch will behave in the same way as an atomic clock in response to changes in absolute properties; we would have to derive their behavior constructively from their atomic description. This is exactly why a general constructive proof of Bridgman’s Principle would be so hard, and why Craig is forced to stick with simple models of clocks and rulers.

Bootstrapping to quantum gravity

Kepler

“If … there were no solid bodies in nature there would be no geometry.”
— Poincaré

A while ago, I discussed the mystery of why matter should be the source of gravity. To date, this remains simply an empirical fact. The deep insight of general relativity – that gravity is the geometry of space and time – only provides us with a modern twist: why should matter dictate the geometry of space-time?

There is a possible answer, but it requires us to understand space-time in a different way: as an abstraction that is derived from the properties of matter itself. Under this interpretation, it is perfectly natural that matter should affect space-time geometry, because space-time is not simply a stage against which matter dances, but is fundamentally dependent on matter for its existence. I will elaborate on this idea and explain how it leads to a new avenue of approach to quantum gravity.

First consider what we mean when we talk about space and time. We can judge how far away a train is by listening to the tracks, or gauge how deep a well is by dropping a stone in and waiting to hear the echo. We can tell a mountain is far away just by looking at it, and that the cat is nearby by tripping over it. In all these examples, an interaction is necessary between myself and the object, sometimes through an intermediary (the light reflected off the mountain into my eyes) and sometimes not (tripping over the cat). Things can also be far away in time. I obviously cannot interact with people who lived in the past (unless I have a time machine), or people who have yet to be born, even if they stood (or will stand) exactly where I am standing now. I cannot easily talk to my father when he was my age, but I can almost do it, just by talking to him now and asking him to remember his past self. When we say that something is far away in either space or time, what we really mean is that it is hard to interact with, and this difficulty of interaction has certain universal qualities that we give the names `distance’ and `time’.
It is worth mentioning here, as an aside, that in a certain sense, the properties of `time’ can be reduced to properties of `distance’ alone. Consider, for instance, that most of our interactions can be reduced to measurements of distances of things from us, at a given time. To know the time, I invariably look at the distance the minute hand has traversed along its cycle on the face of my watch. Our clocks are just systems with `internal’ distances, and it is the varying correspondence of these `clock distances’ with the distances of other things that we call the `time’. Indeed, Julian Barbour has developed this idea into a whole research program in which dynamics is fundamentally spatial, called Shape Dynamics.

Sigmund Freud Museum, Wien – Peter Kogler

So, if distance and time are just ways of describing certain properties of matter, what is the thing we call space-time?

We now arrive at a crucial point that has been stressed by philosopher Harvey Brown: the rigid rods and clocks with which we claim to measure space-time do not really measure it, in the traditional sense of the word `measure’. A measurement implies an interaction, and to measure space-time would be to grant space-time the same status as a physical body that can be interacted with. (To be sure, this is exactly how many people do wish to interpret space-time; see for instance space-time substantivalism and ontological structural realism).

Brown writes:
“One of Bell’s professed aims in his 1976 paper on `How to teach relativity’ was to fend off `premature philosophizing about space and time’. He hoped to achieve this by demonstrating with an appropriate model that a moving rod contracts, and a moving clock dilates, because of how it is made up and not because of the nature of its spatio-temporal environment. Bell was surely right. Indeed, if it is the structure of the background spacetime that accounts for the phenomenon, by what mechanism is the rod or clock informed as to what this structure is? How does this material object get to know which type of space-time — Galilean or Minkowskian, say — it is immersed in?” [1]

I claim that rods and clocks do not measure space-time, they embody space-time. Space-time is an idealized description of how material rods and clocks interact with other matter. This distinction is important because it has implications for quantum gravity. If we adopt the more popular view that space-time is an independently existing ontological construct, it stands to reason that, like other classical fields, we should attempt to directly quantise the space-time field. This is the approach adopted in Loop Quantum Gravity and extolled by Rovelli:

“Physical reality is now described as a complex interacting ensemble of entities (fields), the location of which is only meaningful with respect to one another. The relation among dynamical entities of being contiguous … is the foundation of the space-time structure. Among these various entities, there is one, the gravitational field, which interacts with every other one and thus determines the relative motion of the individual components of every object we want to use as rod or clock. Because of that, it admits a metrical interpretation.” [2]

One of the advantages of this point of view is that it dissolves some seemingly paradoxical features of general relativity, such as the fact that geometry can exist without (non-gravitational) matter, or the fact that geometry can carry energy and momentum. Since gravity is a field in its own right, it doesn’t depend on the other fields for its existence, nor is there any problem with it being able to carry energy. On the other hand, this point of view tempts us into framing quantum gravity as the mathematical problem of quantising the gravitational field. This, I think, is misguided.

I propose instead to return to a more Machian viewpoint, according to which space-time is contingent on (and not independent of) the existence of matter. Now the description of quantum space-time should follow, in principle, from an appropriate description of quantum matter, i.e. of quantum rods and clocks. From this perspective, the challenge of quantum gravity is to rebuild space-time from the ground up — to carry out Einstein’s revolution a second time over, but using quantum material as the building blocks.

Ernst Mach vs. Max Ernst. Get it right, folks.

My view about space-time can be seen as a kind of `pulling oneself up by one’s bootstraps’, or a Wittgenstein’s ladder (in which one climbs to the top of a ladder and then throws the ladder away). It works like this:
Step 1: Define the properties of space-time according to the behaviour of rods and clocks.
Step 2: Look for universal patterns or symmetries among these rods and clocks.
Step 3: Take the ideal form of this symmetry and promote it to an independently existing object called `space-time’.
Step 4: Having liberated space-time from the material objects from which it was conceived, use it as the independent standard against which to compare rods and clocks.

Seen in this light, the idea of judging a rod or a clock by its ability to measure space or time is a convenient illusion: in fact we are testing real rods and clocks against what is essentially an embodiment of their own Platonic ideals, which are in turn conceived as the forms which give the laws of physics their most elegant expression. A pertinent example, much used by Julian Barbour, is Ephemeris time and the notion of a `good clock’. First, by using material bodies like pendulums and planets to serve as clocks, we find that the motions of material bodies approximately conform to Newton’s laws of mechanics and gravitation. We then make a metaphysical leap and declare the laws to be exactly true, and the inaccuracies to be due to imperfections in the clocks used to collect the data. This leads to the definition of the `Ephemeris time’, the time relative to which the planetary motions conform most closely to Newton’s laws, and a `good clock’ is then defined to be a clock whose time is closest to Ephemeris time.

The same thing happens in making the leap to special relativity. Einstein observed that, in light of Maxwell’s theory of electromagnetism, the empirical law of the relativity of motion seemed to have only a limited validity in nature. That is, assuming no changes to the behaviour of rods and clocks used to make measurements, it would not be possible to establish the law of the relativity of motion for electrodynamic bodies. Einstein made a metaphysical leap: he decided to upgrade this law to the universal Principle of Relativity, and to interpret its apparent inapplicability to electromagnetism as the failure of the rods and clocks used to test its validity. By constructing new rods and clocks that incorporated electromagnetism in the form of hypothetical light beams bouncing between mirrors, Einstein rebuilt space-time so as to give the laws of physics a more elegant form, in which the Relativity Principle is valid in the same regime as Maxwell’s equations.

Ladder for Booker T. Washington – Martin Puryear

By now, you can guess how I will interpret the step to general relativity. Empirical observations seem to suggest a (local) equivalence between a uniformly accelerated lab and a stationary lab in a gravitational field. However, as long as we consider `ideal’ clocks to conform to flat Minkowski space-time, we have to regard the time-dilated clocks of a gravitationally affected observer as being faulty. The empirical fact that observers stationary in a gravitational field cannot distinguish themselves (locally) from uniformly accelerated observers then seems accidental; there appears to be no reason why an observer could not locally detect the presence of gravity by comparing his normal clock to an `ideal clock’ that is somehow protected from gravity. On the other hand, if we raise this empirical indistinguishability to a matter of principle – the Einstein Equivalence Principle – we must conclude that time dilation should be incorporated into the very definition of an `ideal’ clock, and similarly with the gravitational effects on rods. Once the ideal rods and clocks are updated to include gravitational effects as part of their constitution (and not an interfering external force) they give rise to a geometry that is curved. Most magically of all, if we choose the simplest way to couple this geometry to matter (the Einstein Field Equations), we find that there is no need for a gravitational force at all: bodies follow the paths dictated by gravity simply because these are now the inertial paths followed by freely moving bodies in the curved space-time. Thus, gravity can be entirely replaced by the geometry of space-time.

As we can see from the above examples, each revolution in our idea of space-time was achieved by reconsidering the nature of rods and clocks, so as to make the laws of physics take a more elegant form by incorporating some new physical principle (e.g. the Relativity and Equivalence principles). What is remarkable is that this method does not require us to go all the way back to the fundamental properties of matter, prior to space-time, and derive everything again from scratch (the constructive theory approach). Instead, we can start from a previously existing conception of space-time and then upgrade it by modifying its primary elements (rods and clocks) to incorporate some new principle as part of physical law (the principle theory approach). The question is, will quantum gravity let us get away with the same trick?

I’m betting that it will. The challenge is to identify the empirical principle (or principles) that embody quantum mechanics, and upgrade them to universal principles by incorporating them into the very conception of the rods and clocks out of which general relativistic space-time is made. The result will be, hopefully, a picture of quantum geometry that retains a clear operational interpretation. Perhaps even Percy Bridgman, who dismissed the Planck length as being of “no significance whatever” [3] due to its empirical inaccessibility, would approve.

Boots with laces – Van Gogh

[1] Brown, Physical Relativity, p8.
[2] Rovelli, `Halfway through the woods: contemporary research on space and time’, in The Cosmos of Science, p194.
[3] Bridgman, Dimensional Analysis, p101.

Stop whining and accept these axioms.

One of the stated goals of quantum foundations is to find a set of intuitive physical principles that can be stated in plain language, from which the essential structure of quantum mechanics can be derived.

So what exactly is wrong with the axioms proposed by Chiribella et al. in arXiv:1011.6451? Loosely speaking, the principles state that information should be localised in space and time, that systems should be able to encode information about each other, and that every process should in principle be reversible, so that information is conserved. The axioms can all be explained using ordinary language, as demonstrated in the sister paper arXiv:1209.5533. They all pertain directly to the elements of human experience, namely, what real experimenters ought to be able to do with the systems in their laboratories. And they all seem quite reasonable, so that it is easy to accept their truth. This is essential, because it means that the apparently counterintuitive behaviour of QM is directly derivable from intuitive principles, much as the counterintuitive aspects of special relativity follow as logical consequences of its two intuitive axioms, the constancy of the speed of light and the relativity principle. Given these features, maybe we can finally say that quantum mechanics makes sense: it is the only way that the laws of physics can lead to a sensible model of information storage and communication!

Let me run through the axioms briefly (note to the wise: I take the `causality’ axiom as implicit, and I’ve changed some of the names to make them sound nicer). I’ll assume the reader is familiar with the distinction between pure states and mixed states, but here is a brief summary. Roughly, a pure state describes a system about which you have maximum information, whereas a mixed state can be interpreted as uncertainty about which pure state the system is really in. Importantly, a pure state does not need to determine the outcomes to every measurement that could be performed on it: even though it contains maximal information about the state, it might only specify the probabilities of what will happen in any given experiment. This is what we mean when we say a theory is `probabilistic’.

First axiom (Distinguishability): if there is a mixed state for which there is at least one pure state that it cannot possibly be with any probability, then the mixed state must be perfectly distinguishable from some other state (presumably, the aforementioned one). It is hard to imagine how this rule could fail: if I have a bag that contains either a spider or a fly with some probability, I should have no problem distinguishing it from a bag that contains a snake. On the other hand, I can’t so easily tell it apart from another bag that simply contains a fly (at least not in a single trial of the experiment).

Second axiom (Compression): If a system contains any redundant information or `extra space’, it should be possible to encode it in a smaller system such that the information can be perfectly retrieved. For example, suppose I have a badly edited book containing multiple copies of some pages, and a few blank pages at the end. I should be able to store all of the information written in the book in a much smaller book, without losing any information, just by removing the redundant copies and blank pages. Moreover, I should be able to recover the original book by copying pages and adding blank pages as needed. This seems like a pretty intuitive and essential feature of the way information is encoded in physical systems.

Third axiom (Locality of information): If I have a joint system (say, of two particles) that can be in one of two different states, then I should be able to distinguish the two different states over many trials, by performing only local measurements on each individual particle and using classical communication. For example, we allow the local measurements performed on one particle to depend on the outcomes of the local measurements on the other particle. On the other hand, we do not need to make use of any other shared resources (like a second set of correlated particles) in order to distinguish the states. I must admit, out of all the axioms, this one seems the hardest to justify intuitively. What indeed is so special about local operations and classical communication that it should be sufficient to tell different states apart? Why can’t we imagine a world in which the only way to distinguish two states of a joint system is to make use of some other joint system? But let us put this issue aside for the moment.

Fourth axiom (Locality of ignorance): If I have two particles in a joint state that is pure (i.e. I have maximal information about it) and if I measure one of them and find it in a pure state, the axiom states that the other particle must also be in a pure state. This makes sense: if I do a measurement on one subsystem of a pure state that results in still having maximal information about that subsystem, I should not lose any information about the other subsystems during the process. Learning new information about one part of a system should not make me more ignorant of the other parts.

So far, all of the axioms described above are satisfied by classical and quantum information theory. Therefore, at the very least, if any of these axioms do not seem intuitive, it is only because we have not sufficiently well developed our intuitions about classical physics, so it cannot really be taken as a fault of the axioms themselves (which is why I am not so concerned about the detailed justification for axiom 3). The interesting axiom is the last one, `purification’, which holds in quantum physics but not in probabilistic classical physics.

Fifth axiom (Conservation of information) [aka the purification postulate]: Every mixed state of a system can be obtained by starting with several systems in a joint pure state, and then discarding or ignoring all except for the system in question. Thus, the mixedness of any state can be interpreted as ignorance of some other correlated states. Furthermore, we require that the purification be essentially unique: all possible pure states of the total set of systems that do the job must be convertible into one another by reversible transformations.

As stated above, it is not so clear why this property should hold in the world. However, it makes more sense if we consider one of its consequences: every irreversible, probabilistic process can be obtained from a reversible process involving additional systems, which are then ignored. In the same way that statistical mechanics allows us to imagine that we could un-scramble an egg, if only we had complete information about its individual atoms and the power to re-arrange them, the purification postulate says that everything that occurs in nature can be un-done in principle, if we have sufficient resources and information. Another way of stating this is that the loss of information that occurs in a probabilistic process is only apparent: in principle the information is conserved somewhere in the universe and is never lost, even though we might not have direct access to it. The `missing information’ in a mixed state is never lost forever, but can always be accessed by some observer, at least in principle.
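To make the purification postulate concrete, here is a standard quantum example, written as a small Python/numpy check (my illustration, not taken from the axioms paper): the maximally mixed state of one qubit is exactly what you get by preparing two qubits in a pure, entangled Bell state and then ignoring one of them.

import numpy as np

# Bell state (|00> + |11>)/sqrt(2): a PURE state of two qubits
phi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(phi, phi.conj())    # 4x4 density matrix of the joint state

# Discard (trace out) the second qubit
rho_A = np.trace(rho.reshape(2, 2, 2, 2), axis1=1, axis2=3)
print(rho_A)    # [[0.5 0.] [0. 0.5]] : the maximally MIXED single-qubit state

The mixedness of the remaining qubit is nothing but ignorance of the partner we discarded, exactly as the axiom demands.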

It is curious that probabilistic classical physics does not obey this property. Surely it seems reasonable to expect that one could construct a probabilistic classical theory in which information is ultimately conserved! In fact, if one attempts this, one arrives at a theory of deterministic classical physics. In such a theory, having maximal knowledge of a state (i.e. the state is pure) further implies that one can perfectly predict the outcome of any measurement on the state, but this means the theory is no longer probabilistic. Indeed, for a classical theory to be probabilistic in the sense that we have defined the term, it necessarily allows processes in which information is irretrievably lost, violating the spirit of the purification postulate.
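Here is a toy way to see this (mine, not the paper’s): in classical probability theory a pure state is a single definite configuration, and discarding part of a definite configuration leaves a definite, still pure, configuration. So no classical mixed state can ever arise from a pure joint state.

# A pure classical state of two coins is one definite configuration:
joint_pure = {("heads", "tails"): 1.0}

# Marginalise, i.e. ignore the second coin:
marginal = {}
for (x, y), p in joint_pure.items():
    marginal[x] = marginal.get(x, 0.0) + p

print(marginal)    # {'heads': 1.0} : still definite, still pure, no mixedness arises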

In conclusion, I’d say this is pretty close to the mystical “Zing” that we were looking for: quantum mechanics is the only reasonable theory in which processes can be inherently probabilistic while at the same time conserving information.

Why quantum gravity needs operationalism: Part 1

This is the first of a series of posts in which I will argue that physicists can gain insight into the puzzles of quantum gravity if we adopt a philosophy I call operationalism. The traditional interpretation of operationalism by philosophers was found to be lacking in several important ways, so the concept will have to be updated to a modern context if we are to make use of it, and its new strengths and limitations will need to be clarified. The goal of this first post is to introduce you to operationalism as it was originally conceived and as I understand it. Later posts will explain the areas in which it failed as a philosophical doctrine, and why it might nevertheless succeed as a tool in theoretical physics, particularly in regard to quantum gravity [1].

Operationalism started with Percy Williams Bridgman. Bridgman was a physicist working in the early 20th century, at the time when the world of physics was being shaken by the twin revolutions of relativity and quantum mechanics. Einstein’s hand was behind both revolutions: first through the publication of his theory of General Relativity in 1916, and second for explaining the photoelectric effect using things called quanta, which earned him the Nobel prize in 1921. This upheaval was a formative time for Bridgman, who was especially struck by Einstein’s clever use of thought experiments to derive special relativity.

Einstein had realized that there was a problem with the concept of `simultaneity’. Until then, everybody had taken it for granted that if two events are simultaneous, then they occur at the same time no matter who is observing them. But Einstein asked the crucial question: how does a person know that two events happened at the same time? To answer it, he had to adopt an operational definition of simultaneity: an observer traveling at constant velocity will consider two equidistant events to be simultaneous if beams of light emitted from each event reach the location of the observer at the same time, as measured by the observer’s clock (this definition can be further generalised to apply to any pair of events as seen by an observer in arbitrary motion).

From this, one can deduce that the relativity principle implies the relativity of simultaneity: two events that are simultaneous for one observer may not be simultaneous for another observer in relative motion. This is one of the key observations of special relativity. Bridgman noticed that Einstein’s deep insight relied upon taking an abstract concept, in this case simultaneity, and grounding it in the physical world by asking `what sort of operations must be carried out in order to measure this thing’?
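As a concrete illustration of the effect (standard special relativity, with numbers I picked for the example): take two events that are simultaneous in a frame S and separated by dx = 300,000 km, and ask when they occur in a frame S’ moving at half the speed of light. The Lorentz transformation gives dt’ = -gamma v dx / c^2:

import math

c = 2.998e8      # speed of light, m/s
v = 0.5 * c      # relative speed of the moving observer
dx = 3.0e8       # separation of the two events in frame S, metres

gamma = 1 / math.sqrt(1 - (v / c)**2)
dt_prime = -gamma * v * dx / c**2
print(dt_prime)  # ~ -0.58 s : simultaneous in S, over half a second apart in S'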

For his own part, Bridgman was a brilliant experimentalist who won the Nobel prize in 1946 for his pioneering work on creating extremely high pressures in his laboratory. Using state-of-the-art technology, he created pressures up to 100,000 atmospheres, nearly 100 times greater than anybody before him, and then did what any good scientist would do: he put various things into his pressure chamber to record what happened to them. Mostly, as you might expect, they got squished. At pressures beyond 25,000 atmospheres, steel can be molded like play-dough; at 50,000 atmospheres all normal liquids have frozen solid. (Of course, Bridgman’s vessel had to be very small to withstand such pressure, which limited the things he could put in it). But Bridgman faced a unique problem: the pressures that he created were so high that he couldn’t use any standard pressure gauge to measure the pressures in his lab, because the gauge would basically get squished like everything else. The situation is the same as trying to measure the temperature of the sun using a regular thermometer: it would explode and vaporize before you could even take a proper reading. Consequently, Bridgman had no scientific way to tell the difference between `really high pressure’ and `really freaking high pressure’, so he was forced to design completely new ways of measuring pressure in his laboratory, such as looking at the phase transition of the element bismuth and the resistivity of the alloy Manganin [2]. This led him to wonder: what does a concept like `pressure’ or `temperature’ really mean in the absence of a measuring technique?

Bridgman proposed that quantities measured by different operations should always be regarded as being fundamentally different, even though they may coincide in certain situations. This led to a minor problem in the definitions of quantities. The temperature of a cup of water is measured by sticking a thermometer in it. The temperature of the sun is measured by looking at the spectrum of radiation emitted from it. If these quantities are measured by such different methods in different regimes, why do we call them both `temperature’? In what sense are our operations measuring the same thing? The solution, according to Bridgman, is that there is a regime in between the two in which both methods of measuring temperature are valid – and in this regime the two measurements must agree. The temperature of molten gold could potentially be measured by the right kind of thermometer, as well as by looking at its radiation spectrum, and both of these methods will give the same temperature. This allows us to connect the concept of temperature on the sun to temperature in your kitchen and call them by the same name.

This method of `patching together’ different ways of measuring the same quantity is reminiscent of placing co-ordinate patches on manifolds in mathematical physics. In general, there is no way to cover an entire manifold (representing space-time for example) with a single set of co-ordinates that are valid everywhere. But we can cover different parts of the manifold in patches, provided that the co-ordinates agree in the areas where they overlap. The key insight is that there is no observer who can see all of space-time at once – any physical observer has to travel from one part of the manifold to another by a continuous route. Hence it does not matter if the observer cannot describe the entire manifold by a single map, so long as they have a series of maps that smoothly translate into one another as they travel along their chosen path – even if the maps used much later in the journey have no connection or overlap with the maps used early in the journey. Similarly, as we extend our measuring devices into new regimes, we must gradually replace them with new devices as we go. The eye is replaced with the microscope, the microscope with the electron microscope and the electron microscope with the particle accelerator, which now bears no resemblance to the eye, although they both gaze upon the same world.

Curiously, there was another man named Bridgman active around the same time, who is likely to be more familiar to artists: that is George Bridgman, author of Bridgman’s Complete Guide to Drawing From Life. Although they were two completely different Bridgmans, working in different disciplines, both of them were concerned with essentially the same problem: how to connect our internal conception of the world with the devices by which we measure the world. In the case of Percy Bridgman, it was a matter of connecting abstract physical quantities to their measurement devices, while George Bridgman aimed to connect the figure in the mind to the functions of the hands and eyes. We close with a quote from the artist:

“Indeed, it is very far from accurate to say that we see with our eyes. The eye is blind but for the idea behind the eye.”

[1] Everything I have written comes from Hasok Chang’s entry in the Stanford Encyclopedia of Philosophy on operationalism, which is both clearer and more thorough than my own ramblings.

[2] Readers interested in the finer points of Percy Bridgman’s work should see his Nobel prize lecture.