
A meditation on physical units: Part 2

[Preface: This is the second part of my discussion of this paper by Craig Holt. It has a few more equations than usual, so strap a seat-belt onto your brain and get ready!]

“Alright brain. You don’t like me, and I don’t like you, but let’s get through this thing and then I can continue killing you with beer.”    — Homer Simpson

Imagine a whale. We like to say that the whale is big. What does that mean? Well, if we measure the length of the whale, say by comparing it to a meter-stick, we will count up a very large number of meters. However, this only tells us that the whale is big in comparison to a meter-stick. It doesn’t seem to tell us anything about the intrinsic, absolute length of the whale. But what is the meaning of `intrinsic, absolute’ length?

Imagine the whale is floating in space in an empty universe. There are no planets, people, fish or meter-sticks to compare the whale to. Maybe we could say that the whale has the property of length, even though we have no way of actually measuring its length. That’s what `absolute’ length means. We can imagine that it has some actual number, independently of any standard for comparison like a meter-stick.

"Not again!"
“Oh no, not again!”

In Craig Holt's paper, this distinction — between measured and absolute properties — is very important. All absolute quantities carry primes (apostrophes), so the absolute length of a whale would be written as whale-length' and the absolute length of a meter-stick is written meter'. The length of the whale that we measure, in meters, can be written as the ratio whale-length' / meter'. This ratio is something we can directly measure, so it doesn't need a prime; we can just call it whale-length: it is the number of meter-sticks that equal a whale-length. It is clear that if we were to change all of the absolute lengths in the universe by the same factor, then the absolute properties whale-length' and meter' would both change, but the measurable property whale-length would not.
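
(A quick check of that last claim, using a bookkeeping symbol $\lambda$ for the common rescaling factor, which I'm introducing just for this illustration: if every absolute length is multiplied by $\lambda$, then

$$ \text{whale-length} = \frac{\text{whale-length}'}{\text{meter}'} \;\longrightarrow\; \frac{\lambda \times \text{whale-length}'}{\lambda \times \text{meter}'} = \frac{\text{whale-length}'}{\text{meter}'}, $$

so the number of meter-sticks per whale is completely blind to the rescaling.)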

Ok, so, you’re probably thinking that it is weird to talk about absolute quantities if we can’t directly measure them — but who says that you can’t directly measure absolute quantities? I only gave you one example where, as it turned out, we couldn’t measure the absolute length. But one example is not a general proof. When you go around saying things like “absolute quantities are meaningless and therefore changes in absolute quantities can’t be detected”, you are making a pretty big assumption. This assumption has a name, it is called Bridgman’s Principle (see the last blog post).

Bridgman’s Principle is the reason why at school they teach you to balance the units on both sides of an equation. For example, `speed’ is measured in units of length per time (no, not milligrams — this isn’t Breaking Bad). If we imagine that light has some intrinsic absolute speed c’, then to measure it we would need to have (for example) some reference length L’ and some reference time duration T’ and then see how many lengths of L’ the light travels in time T’. We would write this equation as:

$$ c' = C\,\frac{L'}{T'} \tag{1} $$

where C is the speed that we actually measure. Bridgman’s Principle says that a measured quantity like C cannot tell us the absolute speed of light c’, it only tells us what the value of c’ is compared to the values of our measuring apparatus, L’ and T’ (for example, in meters per second). If there were some way that we could directly measure the absolute value of c’ without comparing it to a measuring rod and a clock, then we could just write c’ = C without needing to specify the units of C. So, without Bridgman’s Principle, all of Dimensional Analysis basically becomes pointless.
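
(A toy version of the same point, with the factor of 2 chosen arbitrarily for the example: swap the reference rod for one twice as long, and the number we measure changes even though the absolute speed of light does not,

$$ C = \frac{c'\,T'}{L'} \;\longrightarrow\; \frac{c'\,T'}{2L'} = \frac{C}{2}, \qquad c' \text{ unchanged,} $$

so C can only ever tell us about c' relative to our chosen rod and clock.)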

So why should Bridgman’s Principle be true in general? Scientists are usually lazy and just assume it is true because it works in so many cases (this is called “proof by induction”). After all, it is hard to find a way of measuring the absolute length of something, without referring to some other reference object like a meter-stick. But being a good scientist is all about being really tight-assed, so we want to know if Bridgman’s Principle can be proven to be watertight.

A neat example of a watertight principle is the Second Law of Thermodynamics. This Law was also originally an inductive principle (it seemed to be true in pretty much all thermodynamic experiments) but then Boltzmann came along with his famous H-Theorem and proved that it has to be true if matter is made up of atomic particles. This is called a constructive justification of the principle [1].

The H Theorem makes it nice and easy to judge whether some crackpot’s idea for a perpetual motion machine will actually run forever. You can just ask them: “Is your machine made out of atoms?” And if the answer is `yes’ (which it probably is), then you can point out that the H-Theorem proves that machines made up of atoms must obey the Second Law, end of story.

Coming up with a constructive proof, like the H-Theorem, is pretty hard. In the case of Bridgman’s Principle, there are just too many different things to account for. Objects can have numerous properties, like mass, charge, density, and so on; also there are many ways to measure each property. It is hard to imagine how we could cover all of these different cases with just a single theorem about atoms. Without the H-Theorem, we would have to look over the design of every perpetual motion machine, to find out where the design is flawed. We could call this method “proof by elimination of counterexamples”. This is exactly the procedure that Craig uses to lend support to Bridgman’s Principle in his paper.

To get a flavor for how he does it, recall our measurement of the speed of light from equation (1). Notice that the measured speed C does not have to be the same as the absolute speed c’. In fact we can rewrite the equation as:

$$ \frac{c'}{L'/T'} = C \tag{2} $$

and this makes it clear that the number C that we measure is not itself an absolute quantity, but rather is a comparison between the absolute speed of light c’ and the absolute distance L’ per time T’. What would happen if we changed all of the absolute lengths in the universe? Would this change the value of the measured speed of light C? At first glance, you might think that it would, as long as the other absolute quantities on the left hand side of equation (2) are independent of length. But if that were true, then we would be able to measure changes in absolute length by observing changes in the measurable speed of light C, and this would contradict Bridgman’s Principle!

To get around this, Craig points out that the length L’ and time T’ are not fundamental properties of things, but are actually reducible to the atomic properties of physical rods and clocks that we use to make measurements. Therefore, we should express L’ and T’ in terms of the more fundamental properties of matter, such as the masses of elementary particles and the coupling constants of forces inside the rods and clocks. In particular, he argues that the absolute length of any physical rod is equal to some number times the “Bohr radius” of a typical atom inside the rod. This radius is in turn proportional to:

$$ L' \propto \frac{h'}{m'_e\, c'} \tag{3} $$

where h’, c’ are the absolute values of Planck’s constant and the speed of light, respectively, and m’e is the absolute electron mass. Similarly, the time duration measured by an atomic clock is proportional to:

$$ T' \propto \frac{h'}{m'_e\, c'^2} \tag{4} $$

As a result, both the absolute length L’ and time T’ actually depend on the absolute constants c’, h’ and the electron mass m’e. Substituting these into the expression for the measured speed of light, we get:

$$ \frac{c'\,T'}{L'} = \frac{c' \,\big( X\, h'/(m'_e\, c'^2) \big)}{Y\, h'/(m'_e\, c')} = \frac{X}{Y} = C \tag{5} $$

where X and Y are the proportionality constants for the clock time (4) and the rod length (3), respectively. The factors of c', h' and m'e all cancel, and we are left with C = X/Y. The numbers X and Y depend on how we construct our rods and clocks — for instance, they depend on how many atoms are inside the rod, and what kind of atom we use inside our atomic clock. In fact, the definitions of the `meter' and the `second' are specially chosen so as to make this ratio exactly C = 299,792,458 [2].

Now that we have included the fact that our measuring rods and clocks are made out of matter, we see that in fact the left hand side of equation (5) is independent of any absolute quantities. Therefore changing the absolute length, time, mass, speed etc. cannot have any effect on the measured speed of light C, and Bridgman’s principle is safe — at least in this example.
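
For readers who like to see the cancellation done by machine, here is a small symbolic sketch in Python using sympy. The variable names are mine, chosen to mirror the primed quantities above; it just re-does the bookkeeping of equation (5), it is not code from Craig's paper.

```python
import sympy as sp

# Absolute (primed) constants and the rod/clock construction factors X, Y.
c, h, me, X, Y, k = sp.symbols("c h m_e X Y k", positive=True)

L = Y * h / (me * c)      # absolute rod length, as in eq. (3), with constant Y
T = X * h / (me * c**2)   # absolute atomic-clock period, as in eq. (4), with constant X

C = c * T / L             # measured speed of light: C = c' T' / L'

print(sp.simplify(C))                        # X/Y  -- no absolute quantities survive
print(sp.simplify(C.subs(me, k * me) - C))   # 0    -- scaling the electron mass by k changes nothing
print(sp.simplify(C.subs(c, k * c) - C))     # 0    -- ditto for the absolute speed of light
```

The same substitution trick works for h': any global rescaling of the absolute constants simply drops out of the measured number.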

(Some readers might wonder why making a clock heavier should also make it run faster, as seems to be suggested by equation (4). It is important to remember that the usual kinds of clocks we use, like wristwatches, are quite complicated things containing trillions of atoms. To calculate how the behaviour of all these atoms would change the ticking of the overall clock mechanism would be, to put it lightly, a giant pain in the ass. That’s why Craig only considers very simple devices like atomic clocks, whose behaviour is well understood at the atomic level [3].)

image credit: xetobyte – A Break in Reality

Another simple model of a clock is the light clock: a beam of light bouncing between two mirrors separated by a fixed distance L’. Since light has no mass, you might think that the frequency of such a clock should not change if we were to increase all absolute masses in the universe. But we saw in equation (4) that the frequency of an atomic clock is proportional to the electron mass, and so it would increase. It then seems like we could measure this increase in atomic clock frequency by comparing it to a light clock, whose frequency does not change — and then we would know that the absolute masses had changed. Is this another threat to Bridgman’s Principle?

The catch is that, as Craig points out, the length L' between the mirrors of the light clock is determined by a measuring rod, and the rod's length is inversely proportional to the electron mass, as we saw in equation (3). So if we magically increase all the absolute masses, we would also cause the absolute length L' to get smaller, which means the light-clock frequency would increase. In fact, it would increase by exactly the same amount as the atomic clock frequency, so comparing them would not show us any difference! Bridgman's Principle is saved again.
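
Here is the bookkeeping behind that claim, in the same notation (the primed frequencies are symbols I'm introducing here, and the factor of 2 is just the round trip between the mirrors). A light clock with mirror separation L' ticks with frequency

$$ f'_{\text{light}} = \frac{c'}{2L'} \propto \frac{c'}{h'/(m'_e\, c')} = \frac{m'_e\, c'^2}{h'}, $$

while the atomic clock, from equation (4), ticks with frequency

$$ f'_{\text{atom}} = \frac{1}{T'} \propto \frac{m'_e\, c'^2}{h'}. $$

Both frequencies scale with the electron mass in exactly the same way, so their ratio, which is the only thing we can actually measure, stays put.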

Let’s do one more example, this time a little bit more extreme. According to Einstein’s theory of general relativity, every lump of mass has a Schwarzschild radius, which is the radius of a sphere such that if you crammed all of the mass into this sphere, it would turn into a black hole. Given some absolute amount of mass M’, its Schwarzschild radius is given by the equation:

$$ R' = \frac{2\, G'\, M'}{c'^2} \tag{6} $$

where c' is the absolute speed of light from before, and G' is the absolute gravitational constant, which determines how strong the gravitational force is. Now, glancing at the equation, you might think that if we keep increasing all of the absolute masses in the universe, planets will start turning into black holes. For instance, the radius of Earth is about 6370 km. This is the Schwarzschild radius for a mass of roughly a billion times Earth's mass. So if we magically increased all absolute masses by a factor of a billion, shouldn't Earth collapse into a black hole? Then, moments before we all die horribly, we would at least know that the absolute mass had changed, and Bridgman's Principle was wrong.
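
For the record, here is the rough arithmetic behind that factor, using standard textbook values (so treat it as a back-of-envelope estimate):

$$ R_s(\text{Earth's mass}) = \frac{2 G M_\oplus}{c^2} \approx \frac{2 \times (6.7\times 10^{-11}) \times (6.0\times 10^{24})}{(3.0\times 10^{8})^2}\ \text{m} \approx 9\ \text{mm}, $$

and since the Schwarzschild radius grows linearly with mass, matching Earth's actual 6370 km radius takes a mass about $6.4\times 10^{6}\,\text{m} \,/\, 9\times 10^{-3}\,\text{m} \approx 7\times 10^{8}$ times bigger, which is where `roughly a billion' comes from.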

Of course, that is only true if changing the absolute mass doesn't affect the other absolute quantities in equation (6). But as we now know, increasing the absolute mass will cause our measuring rods to shrink, and our clocks to run faster. So the question is, if we scale the masses by some factor X, do all the X's cancel out in equation (6)?

Well, since our absolute lengths have to shrink, the Schwarzschild radius should shrink, so if we multiply M’ by X, then we should divide the radius R’ by X. This doesn’t balance! Hold on though — we haven’t dealt with the constants c’ and G’ yet. What happens to them? In the case of c’, we have c’ = C L’ / T’. Since L’ and T’ both decrease by a factor of X (lengths and time intervals get shorter) there is no overall effect on the absolute speed of light c’.

How do we measure the quantity G’? Well, G’ tells us how much two masses (measured relative to a reference mass m’) will accelerate towards each other due to their gravitational attraction. Newton’s law of gravitation says:

$$ a' = N\,\frac{G'\, m'}{L'^2} \tag{7} $$

where N is some number that we can measure, and it depends on how big the two masses are compared to the reference mass m’, how large the distance between them is compared to the reference length L’, and so forth. If we measure the acceleration a’ using the same reference length and time L’,T’, then we can write:

$$ a' = A\,\frac{L'}{T'^2} \tag{8} $$

where the A is just the measured acceleration in these units. Putting this all together, we can re-arrange equation (7) to get:

$$ G' = \frac{A}{N}\,\frac{L'^3}{m'\, T'^2} \tag{9} $$

and we can define G = (A/N) as the actually measured gravitational constant in the chosen units. From equation (9), we see that increasing all the absolute masses (including the reference mass m') by a factor of X, and hence dividing each instance of L' and T' by X, implies that the absolute constant G' will actually change: it will be divided by a factor of X².
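
Spelling that last step out: take equation (9) with G = A/N held fixed (it is a measured number) and apply the rescaling m' → X m', L' → L'/X, T' → T'/X:

$$ G' \;\longrightarrow\; G\,\frac{(L'/X)^3}{(X m')\,(T'/X)^2} = G\,\frac{L'^3}{X^2\, m'\, T'^2} = \frac{G'}{X^2}. $$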

What is the physics behind all this math? It goes something like this: suppose we are measuring the attraction between two masses separated by some distance. If we increase the masses, then our measuring rods shrink and our clocks get faster. This means that when we measure the accelerations, the objects seem to accelerate faster than before. This is what we expect, because two masses should become more attractive (at the same distance) when they become more massive. However, the absolute distance between the masses also has to shrink. The net effect is that, after increasing all the absolute masses, we find that the masses are producing the exact same attractive force as before, only at a closer distance. This means the absolute attraction at the original distance is weaker — so G’ has become weaker after the absolute masses in the universe have been increased (notice, however, that the actually measured value G does not change).

Diagram of a Cavendish experiment for measuring gravity.

Returning now to equation (6), and multiplying M' by X, dividing R' by X and dividing G' by X², we find that all the extra factors cancel out. We conclude that increasing all the absolute masses in the universe by a factor of a billion will not, in fact, cause Earth to turn into a black hole, because the effect is balanced out by the accompanying changes in the absolute lengths and times of our measuring instruments. Whew!
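
Explicitly, plugging the rescaled quantities back into the right hand side of equation (6):

$$ \frac{2\,(G'/X^2)\,(X M')}{c'^2} = \frac{1}{X}\cdot\frac{2\,G'\,M'}{c'^2} = \frac{R'}{X}, $$

which is exactly the shrunken absolute radius we expected, and since our rods shrink by the same factor, the measured radius R'/L' comes out to the same number as before.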

Craig's paper is long and very thorough. He compares a whole zoo of physical clocks, including electric clocks, light clocks, freely falling inertial clocks, different kinds of atomic clocks and even gravitational clocks made from two orbiting planets. Not only does he establish his claim within Newtonian mechanics, he also covers general relativity and the Dirac equation of quantum theory, including a discussion of Compton scattering (a photon reflecting off an electron). Besides all of this, he takes pains to discuss the meaning of coupling constants, the Planck scale, and the related but distinct concept of scale invariance. All in all, Craig's paper just might be the most comprehensive justification for Bridgman's principle so far in existence!

Most scientists might shrug and say “who needs it?”. In the same way, not many scientists care to examine perpetual motion machines to find out where the flaw lies. In this respect, Craig is a craftsman of the first order — he cares deeply about the details. Unlike the Second Law of Thermodynamics, Bridgman’s Principle seems rarely to have been challenged. This only makes Craig’s defense of it all the more important. After all, it is especially those beliefs which we are disinclined to question that are most deserving of a critical examination.


Footnotes:

[1] Some physical principles, like the Relativity Principle, have never been given a constructive justification. For this reason, Einstein himself seems to have regarded the Relativity Principle with some suspicion. See this great discussion by Brown and Pooley.

[2] Why not just set it to C=1? Well, no reason why not! Then we would replace the meter by the `light-second', and the second by the `light-meter'. And we would say things like "Today I walked 0.3 millionths of a light-second to buy an ice-cream, and it took me just 130 billion light-meters to eat it!" So, you know, that would be a bit weird. But theorists do it all the time.
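
(Those numbers roughly check out, if you feel like doing the arithmetic: 0.3 millionths of a light-second is about $0.3\times 10^{-6} \times 3\times 10^{8} \approx 90$ meters of walking, and 130 billion light-meters is about $1.3\times 10^{11} / (3\times 10^{8}) \approx 430$ seconds, i.e. seven-ish minutes of eating.)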

[3] To be perfectly strict, we cannot assume that a wristwatch will behave in the same way as an atomic clock in response to changes in absolute properties; we would have to derive their behavior constructively from their atomic description. This is exactly why a general constructive proof of Bridgman’s Principle would be so hard, and why Craig is forced to stick with simple models of clocks and rulers.


The Complexity Horizon

Update 7/3/14: Scott Aaronson, horrified at the prevalence of people who casually consider that P might equal NP (like me in the second last paragraph of this post), has produced an exhaustive explanation of why it is stupid to give much credence to this possibility. Since I find myself in agreement with him, I hereby retract my offhand statement that P=NP might pose a problem for the idea of a physical `complexity horizon’. However, I hereby replace it with a much more damning argument in the form of this paper by Oppenheim and Unruh, which shows how to formulate the firewall paradox such that the complexity horizon is no help whatsoever. Having restored balance to the universe, I now return you to the original post.

There have been a couple of really fascinating developments recently in applying computational complexity theory to problems in physics. Physicist Lenny Susskind has a new paper out on the increasingly infamous firewall paradox of black holes, and mathematician Terry Tao just took a swing at one of the Millennium Prize Problems (a list of the hardest and most important mathematical problems still unsolved). In brief, Susskind extends an earlier idea of Harlow and Hayden, using computational complexity to argue that black holes cannot be used to break the known laws of physics. Terry Tao is a maths prodigy who first learned arithmetic at age 2 from Sesame Street. He published his first paper at age 15 and was made full professor by age 24. In short, he is a guy to watch (which, as it turns out, is easy because he maintains an exhaustive blog). In his latest adventure, Tao has suggested a brand new approach to an old problem: proving whether sensible solutions exist to the famous Navier-Stokes equations that describe the flow of fluids like water and air. His big insight was to show that they can be re-interpreted as rules for doing computations using logical gates made out of fluid. The idea is exactly as strange as it sounds (a computer made of water?!) but it might allow mathematicians to resolve the Navier-Stokes question and pick up a cool million from the Clay Mathematics Institute, although there is still a long way to go before that happens. The point is, both Susskind and Tao used the idea from computational complexity theory that physical processes can be understood as computations. If you just said "computational whaaa theory?" then don't worry, I'll give you a little background in a moment. But first, you should go read Scott Aaronson's blog post about this, since that is what inspired me to write the present post.

OK, first, I will explain roughly what computational complexity theory is all about. Imagine that you have gathered your friends together for a fun night of board games. You start with tic-tac-toe, but after ten minutes you get bored because everyone learns the best strategy and then every game becomes a draw. So you switch to checkers. This is more fun, except that your friend George, who is a robot (it is the future, just bear with me), plugs himself into the internet and downloads the world's best checkers-playing program, Chinook. After that, nobody in the room can beat him: even when your other robot friend Sally downloads the same software and plays against George, they always end in stalemate. In fact, a quick search on the net reveals that there is no strategy that can beat them anymore – the best you can hope for is a draw. Dang! It is just tic-tac-toe all over again. Finally, you move on to chess. Now things seem more even: although your robot friends quickly outpace the human players (including your friend Garry Kasparov), battles between the robots are still interesting; each of them is only as good as its software, and there are many competing versions that are constantly being updated and improved. Even though they play at a higher level than human players, it is still uncertain how a given game between two robots will turn out.


After all of this, you begin to wonder: what is it that makes chess harder to figure out than checkers or tic-tac-toe? The question comes up again when you are working on your maths homework. Why are some maths problems easier than others? Can you come up with a way of measuring the `hardness’ of a problem? Well, that is where computational complexity theory comes in: it tells you how `hard’ a problem is to solve, given limited resources.

The limited resources part is important. It turns out that, if you had an infinite amount of time and battery life, you could solve any problem at all using your iPhone, or a pocket calculator. Heck, given infinite time, you could write down every possible chess game by hand, and then find out whether white or black always wins, or if they always draw. Of course, you could do it in shorter time if you got a million people to work on it simultaneously, but then you are using up space for all of those people. Either way, the problem is only interesting when you are limited in how much time or space you have (or energy, or any other resource you care to name). Once you have a resource limit, it makes sense to talk about whether one problem is harder than another (If you want details of how this is done, see for example Aaronson’s blog for his lecture notes on computational complexity theory).
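
To get a feel for just how much heavy lifting `infinite time' does in that thought experiment, here is a back-of-envelope sketch in Python. The branching factor and game length are rough Shannon-style estimates that I'm plugging in for illustration, not exact figures.

```python
import math

# Shannon-style estimate: roughly 35 legal moves per position,
# and a typical game of about 40 moves per side (80 plies).
branching, plies = 35, 80
games = branching ** plies

print(f"roughly 10^{int(math.log10(games))} possible chess games")  # ~10^123

# Even a machine checking a billion billion (10^18) games per second
# would need ~10^105 seconds; the universe is only ~4 x 10^17 seconds old.
print(f"roughly 10^{int(math.log10(games / 1e18))} seconds of brute force")
```

So "just write down every game" is fine as a logical possibility, but hopeless as a physical procedure, which is exactly why the resource limits matter.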

This all seems rather abstract so far. But the study of complexity theory turns out to have some rather interesting consequences in the real world. For example, remember the situation with tic-tac-toe. You might know the strategy that lets you only win or draw. But suppose you were playing a dumb opponent who was not aware of this strategy – they might think that it is possible to beat you. Normally, you could convince them that you are unbeatable by just showing them the strategy so they can see for themselves. Now, imagine a super-smart alien came down to Earth and claimed that, just like with tic-tac-toe, it could never lose at chess. As before, it could always convince us by telling us its strategy — but then we could use the alien’s own strategy against it, and where is the fun in that? Amazingly, it turns out that there is a way that the alien can convince us that it has a winning strategy, without ever revealing the strategy itself! This has been proven by the computational complexity theorists (the method is rather complicated, but you can follow it up here.)

So what has this to do with physics? Let’s start with the black-hole firewall paradox. The usual black-hole information paradox says: since information cannot be destroyed, and information cannot leak out of a black hole, how do we explain what happens to the information (say, on your computer’s hard drive) that falls into a black hole, when the black hole eventually evaporates? One popular solution is to say that the information does leak out of the black hole over time, just very slowly and in a highly scrambled-up form so that it looks just like randomness. The firewall paradox puts a stick in the gears of this solution. It says that if you believe this is true, then it would be possible to violate the laws of quantum mechanics.

Specifically, say you had a quantum system that fell into a black hole. If you gathered all of the leaked information about the quantum state from outside the black hole, and then jumped into the black hole just before it finished evaporating, you could combine this information with whatever is left inside the black hole to obtain more information about the quantum state than would normally be allowed by the laws of physics. To avoid breaking the laws of quantum mechanics, you would have to have a wall of infinite energy density at the event horizon (the firewall) that stops you bringing the outside information to the inside, but this seems to contradict what we thought we knew about black holes (and it upsets Stephen Hawking). So if we try to solve the information paradox by allowing information to leak out of the black hole, we just end up in another paradox!

Source: New Scientist

One possible resolution comes from computational complexity theory. It turns out that, before you can break the laws of quantum mechanics, you first have to `unscramble’ all of the information that you gathered from outside the black hole (remember, when it leaks out it still looks very similar to randomness). But you can’t spend all day doing the unscrambling, because you are falling into the black hole and about to get squished at the singularity! Harlow and Hayden showed that in fact you do not have nearly as much time as you would need to unscramble the information before you get squished; it is simply `too hard’ complexity-wise to break the laws of quantum mechanics this way! As Scott Aaronson puts it, the geometry of spacetime is protected by an “armor” of computational complexity, kind of like a computational equivalent of the black hole’s event horizon. Aaronson goes further, speculating that there might be problems that are normally `hard’ to solve, but which become easy if you jump into a black hole! (This is reminiscent of my own musings about whether there might be hypotheses that can only be falsified by an act of black hole suicide).

But the matter is more subtle. For one thing, all of computational complexity theory rests on the belief that some problems are intrinsically harder than others, specifically, that there is no ingenious as-yet undiscovered computer algorithm that will allow us to solve hard problems just as quickly as easy ones (for the nerds out there, I’m just saying nobody has proven that P is not equal to NP). If we are going to take the idea of the black hole complexity horizon seriously, then we must assume this is true — otherwise a sufficiently clever computer program would allow us to bypass the time constraint and break quantum mechanics in the firewall scenario. Whether or not you find this to be plausible, you must admit there may be something fishy about a physical law that requires P not equal to NP in order for it to work.

Furthermore, even if we grant that this is the case, it is not clear that the complexity barrier is that much of a barrier. Just because a problem is hard in general does not mean it can’t be solved in specific instances. It could be that for a sufficiently small black hole and sufficiently large futuristic computing power, the problem becomes tractable, in which case we are back to square one. Given these considerations, I think Aaronson’s faith in the ability of computational complexity to save us from paradoxes might be premature — but perhaps it is worth exploring just in case.