**Update 7/3/14:** *Scott Aaronson, horrified at the prevalence of people who casually consider that P might equal NP (like me in the second-to-last paragraph of this post), has produced an exhaustive explanation of why it is stupid to give much credence to this possibility. Since I find myself in agreement with him, I hereby retract my offhand statement that P=NP might pose a problem for the idea of a physical ‘complexity horizon’. However, I hereby replace it with a much more damning argument in the form of this paper by Oppenheim and Unruh, which shows how to formulate the firewall paradox such that the complexity horizon is no help whatsoever. Having restored balance to the universe, I now return you to the original post.*

There have been a couple of really fascinating developments recently in applying computational complexity theory to problems in physics. Physicist Lenny Susskind has a new paper out on the increasingly infamous firewall paradox of black holes, and mathematician Terry Tao just took a swing at one of the Millennium Prize Problems (a list of the hardest and most important mathematical problems still unsolved). In brief, Susskind extends an earlier idea of Harlow and Hayden, using computational complexity to argue that black holes cannot be used to break the known laws of physics. Terry Tao is a maths prodigy who first learned arithmetic at age 2 from Sesame Street. He published his first paper at age 15 and was made a full professor by age 24. In short, he is a guy to watch (which, as it turns out, is easy because he maintains an exhaustive blog). In his latest adventure, Tao has suggested a brand new approach to an old problem: proving whether sensible solutions exist to the famous Navier-Stokes equations that describe the flow of fluids like water and air. His big insight was to show that they can be re-interpreted as rules for doing computations using logical gates made out of fluid. The idea is exactly as strange as it sounds (a computer made of water?!) but it might allow mathematicians to resolve the Navier-Stokes question and pick up a cool million from the Clay Mathematics Institute, although there is still a long way to go before that happens. The point is, both Susskind and Tao used the idea from computational complexity theory that physical processes can be understood as computations. If you just said “computational whaaa theory?” then don’t worry, I’ll give you a little background in a moment. But first, you should go read Scott Aaronson’s blog post about this, since that is what inspired me to write the present post.

Ok, first, I will explain roughly what computational complexity theory is all about. Imagine that you have gathered your friends together for a fun night of board games. You start with tic-tac-toe, but after ten minutes you get bored because everyone learns the best strategy and then every game becomes a draw. So you switch to checkers. This is more fun, except that your friend George who is a robot (it is the future, just bear with me) plugs himself into the internet and downloads the world’s best checkers-playing program, Chinook. After that, nobody in the room can beat him: even when your other robot friend Sally downloads the same software and plays against George, their games always end in a draw. In fact, a quick search on the net reveals that there is no strategy that can beat them anymore – the best you can hope for is a draw. Dang! It is just tic-tac-toe all over again. Finally, you move on to chess. Now things seem more even: although your robot friends quickly outpace the human players (including your friend Garry Kasparov), battles between the robots are still interesting; each of them is only as good as their software, and there are many competing versions that are constantly being updated and improved. Even though they play at a higher level than human players, it is still uncertain how a given game between two robots will turn out.

After all of this, you begin to wonder: what is it that makes chess harder to figure out than checkers or tic-tac-toe? The question comes up again when you are working on your maths homework. Why are some maths problems easier than others? Can you come up with a way of measuring the ‘hardness’ of a problem? Well, that is where computational complexity theory comes in: it tells you how ‘hard’ a problem is to solve, given limited resources.
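To make the tic-tac-toe story concrete, here is a minimal sketch (my own illustration, not from the post) of what it means to ‘solve’ a game: an exhaustive search of the whole game tree, which is feasible only because tic-tac-toe has fewer than 3^9 = 19,683 board states. Running it confirms what your bored friends discovered: perfect play by both sides always ends in a draw.

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, indexed 0-8 left-to-right, top-to-bottom.
LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def solve(board, to_move):
    """Best achievable outcome with perfect play: +1 X wins, 0 draw, -1 O wins."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0  # board full, nobody won: draw
    nxt = 'O' if to_move == 'X' else 'X'
    results = [solve(board[:i] + to_move + board[i+1:], nxt)
               for i, c in enumerate(board) if c == ' ']
    # X picks the move maximising the outcome, O the move minimising it.
    return max(results) if to_move == 'X' else min(results)

print(solve(' ' * 9, 'X'))  # 0 — perfect play from both sides is always a draw
```

The same brute-force idea works in principle for checkers or chess; the only thing that changes is the number of positions, which is exactly where ‘hardness’ comes in.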

The limited resources part is important. It turns out that, if you had an infinite amount of time and battery life, you could solve any problem at all using your iPhone, or a pocket calculator. Heck, given infinite time, you could write down every possible chess game *by hand*, and then find out whether white or black always wins, or if they always draw. Of course, you could do it in less time if you got a million people to work on it simultaneously, but then you are using up space for all of those people. Either way, the problem is only interesting when you are limited in how much time or space you have (or energy, or any other resource you care to name). Once you have a resource limit, it makes sense to talk about whether one problem is harder than another (if you want details of how this is done, see for example Aaronson’s blog for his lecture notes on computational complexity theory).
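Some back-of-the-envelope arithmetic (using rough, widely quoted order-of-magnitude estimates, not exact figures) shows why the resource limit is what separates these games. Even at a billion positions per second, exhaustive enumeration goes from trivial to hopeless:

```python
# Rough position counts for each game; the first is an exact upper bound,
# the other two are order-of-magnitude estimates from the literature.
positions = {
    "tic-tac-toe": 3 ** 9,   # 19,683 — generous upper bound on board states
    "checkers":    5e20,     # ~5 x 10^20 positions (Schaeffer's Chinook project)
    "chess":       1e44,     # order-of-magnitude estimate of legal positions
}
rate = 1e9                   # suppose we check a billion positions per second

SECONDS_PER_YEAR = 3600 * 24 * 365
for game, n in positions.items():
    years = n / rate / SECONDS_PER_YEAR
    print(f"{game:12s} ~{n:.0e} positions, ~{years:.1e} years to enumerate")
```

Tic-tac-toe finishes in a fraction of a second, checkers would take tens of thousands of years (which is why Chinook needed cleverer methods than brute force), and chess outlasts the age of the universe many times over.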

This all seems rather abstract so far. But the study of complexity theory turns out to have some rather interesting consequences in the real world. For example, remember the situation with tic-tac-toe. You might know the strategy that guarantees you can only win or draw. But suppose you were playing a dumb opponent who was not aware of this strategy – they might think that it is possible to beat you. Normally, you could convince them that you are unbeatable by just showing them the strategy so they can see for themselves. Now, imagine a super-smart alien came down to Earth and claimed that, just like with tic-tac-toe, it could never lose at chess. As before, it could always convince us by telling us its strategy — but then we could use the alien’s own strategy against it, and where is the fun in that? Amazingly, it turns out that there is a way that the alien can convince us that it has a winning strategy, without ever revealing the strategy itself! This has been proven by computational complexity theorists (the method is rather complicated, but you can follow it up here).
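To give a flavour of how a secret can be proven without being revealed, here is a toy sketch of a classic zero-knowledge protocol — my illustration, using the well-known graph-isomorphism example rather than the (much more involved) chess result. The prover knows a secret relabelling that turns graph G1 into graph G2, and convinces the verifier of this over many rounds without ever exposing the relabelling; a cheating prover survives each round with probability only 1/2.

```python
import random

def permute(graph, perm):
    """Relabel a graph (a set of 2-element frozenset edges) by a permutation."""
    return {frozenset((perm[u], perm[v])) for u, v in graph}

def zk_rounds(g1, g2, secret_perm, k=40):
    """Honest prover knows secret_perm with permute(g1, secret_perm) == g2."""
    n = len(secret_perm)
    for _ in range(k):
        # Prover: commit to a fresh random relabelling H of G1.
        sigma = list(range(n)); random.shuffle(sigma)
        h = permute(g1, sigma)
        # Verifier: demand an isomorphism from either G1 or G2 to H, at random.
        challenge = random.choice([1, 2])
        if challenge == 1:
            if permute(g1, sigma) != h:              # reveal sigma itself
                return False
        else:
            # H = sigma(G1) = sigma(secret^-1(G2)), so reveal the composition.
            inv = [0] * n
            for i, p in enumerate(secret_perm):
                inv[p] = i
            tau = [sigma[inv[j]] for j in range(n)]
            if permute(g2, tau) != h:
                return False
        # Either answer alone looks like a random relabelling: the
        # verifier never sees sigma and tau for the SAME commitment,
        # which is what keeps secret_perm hidden.
    return True

g1 = {frozenset(e) for e in [(0, 1), (1, 2)]}
secret = [1, 2, 0]              # the secret isomorphism: G2 = secret(G1)
g2 = permute(g1, secret)
print(zk_rounds(g1, g2, secret))  # True — verifier is convinced after 40 rounds
```

The chess result in the post rests on far heavier machinery (interactive proofs for PSPACE), but the basic trick — answer random challenges that only someone holding the secret could consistently answer — is the same in spirit.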

So what has this to do with physics? Let’s start with the black-hole firewall paradox. The usual black-hole information paradox says: since information cannot be destroyed, and information cannot leak out of a black hole, how do we explain what happens to the information (say, on your computer’s hard drive) that falls into a black hole, when the black hole eventually evaporates? One popular solution is to say that the information *does* leak out of the black hole over time, just very slowly and in a highly scrambled-up form so that it looks just like randomness. The firewall paradox puts a stick in the gears of this solution. It says that if you believe this is true, then it would be possible to violate the laws of quantum mechanics.

Specifically, say you had a quantum system that fell into a black hole. If you gathered all of the leaked information about the quantum state from outside the black hole, and then jumped into the black hole just before it finished evaporating, you could combine this information with whatever is left inside the black hole to obtain more information about the quantum state than would normally be allowed by the laws of physics. To avoid breaking the laws of quantum mechanics, there would have to be a wall of extremely high-energy particles at the event horizon (the firewall) that stops you bringing the outside information to the inside, but this seems to contradict what we thought we knew about black holes (and it upsets Stephen Hawking). So if we try to solve the information paradox by allowing information to leak out of the black hole, we just end up in another paradox!

One possible resolution comes from computational complexity theory. It turns out that, before you can break the laws of quantum mechanics, you first have to `unscramble’ all of the information that you gathered from outside the black hole (remember, when it leaks out it still looks very similar to randomness). But you can’t spend all day doing the unscrambling, because you are falling into the black hole and about to get squished at the singularity! Harlow and Hayden showed that in fact you do not have nearly as much time as you would need to unscramble the information before you get squished; it is simply `too hard’ complexity-wise to break the laws of quantum mechanics this way! As Scott Aaronson puts it, the geometry of spacetime is protected by an “armor” of computational complexity, kind of like a computational equivalent of the black hole’s event horizon. Aaronson goes further, speculating that there might be problems that are normally `hard’ to solve, but which become easy if you jump into a black hole! (This is reminiscent of my own musings about whether there might be hypotheses that can only be falsified by an act of black hole suicide).
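The shape of the Harlow-Hayden argument is easy to caricature in code (a schematic sketch with my toy scalings, not their actual bounds): the unscrambling task is believed to take a time exponential in the size of the system, while your deadline — the remaining lifetime of the black hole, or your fall to the singularity — grows only polynomially. For any realistically large system, you miss the deadline by an astronomical margin:

```python
def unscramble_steps(n):
    """Generic exponential-time decoding task: ~2^n steps for n qubits.
    (Toy stand-in for the believed hardness of the Harlow-Hayden task.)"""
    return 2 ** n

def deadline_steps(n):
    """Schematic polynomial deadline before you hit the singularity: ~n^3 steps.
    (Toy stand-in for the polynomially-growing time actually available.)"""
    return n ** 3

n = 100  # even a modest number of qubits puts decoding hopelessly out of reach
print(unscramble_steps(n) / deadline_steps(n))  # ~1.3e24 — deadline missed
```

This is the sense in which complexity acts as ‘armor’: nothing in principle forbids the decoding, but no one falling in could ever finish it in time.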

But the matter is more subtle. For one thing, all of computational complexity theory rests on the belief that some problems are intrinsically harder than others; specifically, that there is no ingenious, as-yet-undiscovered computer algorithm that will allow us to solve hard problems just as quickly as easy ones (for the nerds out there, I’m just saying nobody has proven that P is not equal to NP). If we are going to take the idea of the black hole complexity horizon seriously, then we must assume this is true — otherwise a sufficiently clever computer program would allow us to bypass the time constraint and break quantum mechanics in the firewall scenario. Whether or not you find this to be plausible, you must admit there may be something fishy about a physical law that requires P not equal to NP in order for it to work.
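For readers who want the P vs NP point made concrete, here is an illustrative example of my own choosing (subset-sum, a standard NP-complete problem, not something from the post): *checking* a proposed answer takes one quick pass, but the only known general way to *find* an answer tries up to 2^n subsets. P = NP would mean this gap — and with it, the hardness the complexity horizon leans on — could always be closed.

```python
from itertools import combinations

def verify(nums, subset, target):
    """Polynomial time: checking a claimed solution is one pass over it."""
    return subset is not None and sum(subset) == target \
        and all(x in nums for x in subset)

def brute_force(nums, target):
    """Exponential time: search all 2^n subsets for one summing to target."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 9, 8, 4, 5, 7]
solution = brute_force(nums, 15)
print(solution, verify(nums, solution, 15))  # [8, 7] True
```

Every known algorithm for problems like this takes exponential time in the worst case — but nobody has proven that no fast algorithm exists, which is exactly the loophole the next paragraph worries about.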

Furthermore, even if we grant that this is the case, it is not clear that the complexity barrier is that much of a barrier. Just because a problem is hard in general does not mean it can’t be solved in specific instances. It could be that for a sufficiently small black hole and sufficiently large futuristic computing power, the problem becomes tractable, in which case we are back to square one. Given these considerations, I think Aaronson’s faith in the ability of computational complexity to save us from paradoxes might be premature — but perhaps it is worth exploring just in case.