Death to PowerPoint!

There is one thing that has always baffled me about academia, and theoretical physics in particular. Here we have a community of people whose work — indeed, whose very careers — depend on their ability to communicate complex ideas to each other and to the broader public in order to secure funding for their projects. To be an effective working physicist, you basically have to do three things: publish papers, go to conferences, and give presentations. LOTS of presentations. In principle, this should be easy: we are usually talking to a receptive audience of our peers or educated outsiders, we presumably know the subject matter backwards, and many of us have had years of experience giving public talks. So can someone please tell me why the heck so many physicists are still so bad at it?

Now before you start trying to guess whether I am ranting about anyone in particular, let me set your mind at ease — I am talking about everybody, probably including you, and certainly including myself (well, up to a point). I except only those few speakers in physics who really know how to engage their audience and deliver an effective presentation (if you know any examples, please post names or links in the comments — I want to catalog these guys like rare insects). But instead of complaining about it, I am going to try and perpetrate a solution. There is an enemy in our midst: slide shows. We are crippling our communication skills by our unspoken subservience to the idea that a presentation that doesn’t contain at least 15 slides with graphs and equations does not qualify as legitimate science.

The Far Side

Let me set the record straight: the point of a presentation is not to convince people that you are a big important scientist who knows what he is doing. We already know that, and if you are in fact just an imposter, we probably already know that too. Away with pretenses, with the insecurities that force you to obfuscate the truth. The truth is: you are stupid, but you are trying your best to do science. Your audience is also stupid, but they are trying their best to understand you. We are a bunch of dumb, ignorant, smelly humans groping desperately for a single grain of the truth, and we will never get that truth so long as we dress ourselves up like geniuses who know it all. Let’s just be open about it. Those people in your talk, who look so sharp and attentive and nod their heads sagely when you speak, but ask no questions — you can be sure they have no damn clue what is going on. And you, the speaker, are not there to toot your trumpet or parade up and down showing everyone how magnanimously you performed real calculations or did real experiments with things of importance — you are there to communicate ideas, and nothing else. Humble yourself before your audience, invite them to eviscerate you (figuratively), put everything at stake for the truth, and they will join you instead of attacking you. They might then be willing to ask you the REAL questions — instead of those pretend questions we all know are designed to show everyone else how smart the asker is, because they already know the answer.*

*(I am guilty of this, but I balance it out by asking an equal number of really dumb questions).

I don’t want questions from people who have understood my talk perfectly and are merely demonstrating this fact to everyone else in the room: I want dumb questions, obvious questions, offensive questions, real questions that strike at the root of what is going on. Life is too short to beat around the bush, let’s just cut to the chase and do some damn physics! You don’t know what that symbol means? Ask me! If I’m wrong I’m wrong, if your question is dumb, it’s dumb, but I’ll answer it anyway and we can move on like adults.

Today I trialed a new experiment of mine: I call it the “One Slide Wonder”. I gave a one hour presentation based on one slide. I think it was a partial success, but needs refinement. For anyone who wants to get on board with this idea, the rules are as follows:

1. Thou shalt make thy presentation with only a single slide.

2. The slide shall contain things that stimulate discussion and invite questions, or serve as handy references, but NOT detailed proofs or lengthy explanations. These will come from your mouth and chalk-hand.

3. The time spent talking about the slide shall not exceed the time that could reasonably be allotted to a single slide — certainly not more than 10-15 minutes.

4. After this time, thou shalt invite questions, and the discussion subsists thereupon for the duration of the session or until such a time as it wraps up in a natural way.

To some people, this might seem terrifying: what if nobody has any questions? What if I present my one slide, everyone coughs in awkward silence, and I still have 45 minutes to fill? Do I have to dance a jig or sing aloud for them? It is just like my childhood nightmares! To those who fear this scenario, I say: be brave. You know why talks always run overtime? Because the audience is bursting with questions and they keep interrupting the speaker to clarify things. This is usually treated like a nuisance and the audience is told to “continue the discussion in question time” — except there isn’t any question time, because there were too many fucking slides.

So let’s give them what they want: a single slide that we can all discuss to our heart’s content. You bet it can take an hour. Use your power as the speaker to guide the topic of discussion to what you want to talk about. Use the blackboard. Get covered in chalk, give the chalk to the audience, get interactive, encourage excitement — above all, destroy the facade of endless slides and break through to the human beings who are sitting there trying to talk back to you. If you want to be sure to incite discussion, just write some deliberately provocative statement on your slide and then stand there and wait. No living physicist can resist the combination of an awkward silence and the desire to challenge your claim that the many-worlds interpretation can be tested. And finally, in the absolute worst-case scenario, nobody has any questions after your one slide, you just say “Thank you” and take a seat, and you will go down in history as having given the most concise talk ever.

PhD Comics

The Zen of the Quantum Omelette

“[Quantum mechanics] is not purely epistemological; it is a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature, all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble. Yet we think that the unscrambling is a prerequisite for any further advance in basic physical theory. For, if we cannot separate the subjective and objective aspects of the formalism, we cannot know what we are talking about; it is just that simple.” [1]

– E. T. Jaynes

Note: this post is about foundational issues in quantum mechanics, which means it is rather long and may be boring to non-experts (not to mention a number of experts). I’ve tried to use simple language so that the adventurous layman can still get the gist of it, if he or she is willing (hey, fortune favours the brave).

As I’ve said before, I think research on the foundations of quantum mechanics is important. One of the main goals of work on foundations (perhaps the main goal) is to find a set of physical principles that can be stated in common language, but can also be implemented mathematically to obtain the model that we call `quantum mechanics’.

Einstein was a big fan of starting with simple intuitive principles on which a more rigorous theory is based. The special and general theories of relativity are excellent examples. Both are based on the `Principle of Relativity’, which states (roughly) that motion between two systems is purely relative. We cannot say whether a given system is truly in motion or not; the only meaningful question is whether the system is moving relative to some other system. There is no absolute background space and time in which objects move or stand still, like actors on a stage. In fact there is no stage at all, only the mutual distances between the actors, as experienced by the actors themselves.

The way I have stated the principle is somewhat vague, but it has a clear philosophical intention which can be taken as inspiration for a more rigorous theory. Of particular interest is the identification of a concept that is argued to be meaningless or illusory — in this case the concept of an object having a well-defined motion independent of other objects. One could arrive at the Principle of Relativity by noticing an apparent conspiracy in the laws of nature, and then invoking the principle as a means of avoiding the conspiracy. If we believe that motion is absolute, then we should find it mighty strange that we can play a game of ping-pong on a speeding train, without getting stuck to the wall. Indeed, if it weren’t for the scenery flying past, how would we know we were traveling at all? And even then, as the phrasing suggests, could we not easily imagine that it is the scenery moving past us while we remain still? Why, then, should Nature take such pains to hide from us the fact that we are in motion? The answer is the Zen of relativity — Nature does not conceal our true motion from us, instead, there is no absolute motion to speak of.

A similar leap is made from the special to the general theory of relativity. If we think of gravity as being a field, just like the electromagnetic field, then we notice a very strange coincidence: the charge of an object in the gravitational field is exactly equal to its inertial mass. By contrast, a particle can have an electric charge completely unrelated to its inertia. Why this peculiar conspiracy between gravitational charge and inertial mass? Because, quoth Einstein, they are the same thing. This is essentially the `Principle of Equivalence’ on which Einstein’s theory of gravity is based.
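If you like your coincidences numerical, here is a quick back-of-the-envelope sketch in Python (my own toy calculation, using Newtonian gravity and an arbitrary electric field, not anything from Einstein): the test mass drops out of the gravitational acceleration entirely, while the electric acceleration depends on the charge-to-mass ratio.

```python
# Toy illustration: gravitational acceleration a = G*M/r^2 is the same for
# every test body, because gravitational charge and inertial mass cancel.
# Electric acceleration a = q*E/m depends on the charge-to-mass ratio.

G, M, r = 6.674e-11, 5.972e24, 6.371e6    # Newton's G, Earth's mass and radius (SI)
E_field = 100.0                           # an arbitrary electric field, in V/m

for m in (0.001, 1.0, 1000.0):            # wildly different inertial masses
    print(f"m = {m:8} kg -> gravitational a = {G * M / r**2:.3f} m/s^2")

q = 1.602e-19                             # one elementary charge, in C
for name, m in (("electron", 9.109e-31), ("proton", 1.673e-27)):
    print(f"{name}: electric a = {q * E_field / m:.3e} m/s^2")
```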

Einstein

These considerations tell us that to find the deep principles in quantum mechanics, we have to look for seemingly inexplicable coincidences that cry out for explanation. In this post, I’ll discuss one such possibility: the apparent equivalence of two conceptually distinct types of probabilistic behaviour, that due to ignorance and that due to objective uncertainty. The argument runs as follows. Loosely speaking, in classical physics, one does not seem to require any notion of objective randomness or inherent uncertainty. In particular, it is always possible to explain observations using a physical model that is ontologically within the bounds of classical theory and such that all observable properties of a system are determined with certainty. In this sense, any uncertainty arising in classical experiments can always be regarded as our ignorance of the true underlying state of affairs, and we can perfectly well conceive of a hypothetical perfect experiment in which there is no uncertainty about the outcomes.

This is not so easy to maintain in quantum mechanics: any attempt to conceive of an underlying reality without uncertainty seems to result in models of the world that violate dearly-held principles, like the idea that signals cannot propagate faster than light, and that experimenters have free will. This has prompted many of us to allow some amount of `objective’ uncertainty into our picture of the world, where even the best conceivable experiments must have some uncertain outcomes. These outcomes are unknowable, even in principle, until the moment that we choose to measure them (and the very act of measurement renders certain other properties unknowable). The presence of these two kinds of randomness in physics — the subjective randomness, which can always be removed by some hypothetical improved experiment, and the objective kind, which cannot be so removed — leads us to another dilemma: where is the boundary that separates these two kinds of uncertainty?

E.T. Jaynes
“Are you talkin’ to me?”

Now at last we come to the `omelette’ that badass statistician and physicist E. T. Jaynes describes in the opening quote. Since quantum systems are inherently uncertain objects, how do we know how much of that uncertainty is due to our own ignorance, and how much of it is really `inside’ the system itself? Views range from the extreme subjective Bayesian position (all uncertainty is ignorance) to various opposite extremes like the many-worlds interpretation (in which, arguably, all uncertainty is objective). But a number of researchers, particularly those in the quantum information community, opt for a more Zen-like answer: the reason we can’t tell the difference between objective and subjective probability is that there is no difference. Asking whether the quantum state describes my personal ignorance about something, or whether the state “really is” uncertain, is a meaningless question. But can we take this Zen principle and turn it into something concrete, like the Relativity principle, or are we just dodging the problem with semantics?

I think there might be something to be gained from taking this idea seriously and seeing where it leads. One way of doing this is to show that the predictions of quantum mechanics can be derived by taking this principle as an axiom. In this paper by Chiribella et al., the authors use the “Purification postulate”, plus some other axioms, to derive quantum theory. What is the Purification postulate? It states that “the ignorance about a part is always compatible with a maximal knowledge of the whole”. Or, in my own words, the subjective ignorance of one system about another system can always be regarded as the objective uncertainty inherent in the state that encompasses both.
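To see what this means in the standard formalism, here is a minimal numerical sketch (mine, not from the paper): a Bell pair is a pure state, representing maximal knowledge of the whole, yet the reduced state of either qubit alone is maximally mixed, representing total ignorance of the part.

```python
import numpy as np

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)    # (|00> + |11>)/sqrt(2)
rho = np.outer(bell, bell.conj())             # density matrix of the whole pair

# Purity Tr(rho^2) = 1 means a pure state: maximal knowledge of the whole.
print("purity of the whole:", np.trace(rho @ rho).real)      # 1.0

# Partial trace over the second qubit: rho_A[i, j] = sum_k rho[ik, jk].
rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print("reduced state of one half:\n", rho_A)                  # identity / 2

# Purity 1/2 is the minimum possible for a qubit: total ignorance of the part.
print("purity of the part:", np.trace(rho_A @ rho_A).real)    # 0.5
```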

There is an important side comment to make before examining this idea further. You’ll notice that I have not restricted my usage of the word `ignorance’ to human experimenters, but that I take it to apply to any physical system. This idea also appears in relativity, where an “observer in motion” can refer to any object in motion, not necessarily a human. Similarly, I am adopting here the viewpoint of the information theorists, which says that two correlated or interacting systems can be thought of as having information about each other, and the quantification of this knowledge entails that systems — not just people — can be ignorant of each other in some sense. This is important because I think that an overly subjective view of probabilities runs the risk of concealing important physics behind the definition of the `rational agent’, which to me is a rather nebulous concept. I prefer to take the route of Rovelli and make no distinction between agents and generic physical systems. I think this view fits quite naturally with the Purification postulate.

In the paper by Chiribella et al., the postulate is given a rigorous form and used to derive quantum theory. This alone is not quite enough, but it is, I think, very compelling. To establish the postulate as a physical principle, more work needs to be done on the philosophical side. I will continue to use Rovelli’s relational interpretation of quantum mechanics as an integral part of this philosophy (for a very readable primer, I suggest his FQXi essay).

In the context of this interpretation, the Purification postulate makes more sense. Conceptually, the quantum state does not represent information about a system in isolation, but rather it represents information about a system relative to another system. It is as meaningless to talk about the quantum state of an isolated system as it is to talk about space-time without matter (i.e. Mach’s principle [2]). The only meaningful quantities are relational quantities, and in this spirit we consider the separation of uncertainty into subjective and objective parts to be relational and not fundamental. Can we make this idea more precise? Perhaps we can, by associating subjective and objective uncertainty with some more concrete physical concepts. I’ll probably do that in a follow-up post.

I conclude by noting that there are other aspects of quantum theory that cry out for explanation. If hidden variable accounts of quantum mechanics imply elements of reality that move faster than light, why does Nature conspire to prevent us from using them to send signals faster than light? And since the requirement of no faster-than-light signalling still allows correlations even stronger than those produced by entanglement, why does entanglement stop short of that limit? I think there is still a lot that could be done in trying to turn these curious observations into physical principles, and then trying to build models based on them.
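To make the last of those observations concrete: in the CHSH test, local hidden variables give a correlation value S of at most 2, quantum entanglement reaches Tsirelson’s bound of 2√2 ≈ 2.83, and yet no-signalling alone would permit S = 4 (the so-called PR box). Here is a little numerical check of the quantum value (my own sketch, using the standard textbook measurement angles):

```python
import numpy as np

def spin(theta):
    """Spin observable (outcomes +1/-1) along angle theta in the x-z plane."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

psi = np.array([1, 0, 0, 1]) / np.sqrt(2)      # Bell state (|00> + |11>)/sqrt(2)
A = [spin(0), spin(np.pi / 2)]                 # Alice's two measurement settings
B = [spin(np.pi / 4), spin(-np.pi / 4)]        # Bob's two measurement settings

def corr(a, b):
    """Correlation <psi| a (x) b |psi> for a joint measurement."""
    return (psi.conj() @ np.kron(a, b) @ psi).real

S = corr(A[0], B[0]) + corr(A[0], B[1]) + corr(A[1], B[0]) - corr(A[1], B[1])
print(f"quantum CHSH value S = {S:.4f}")       # 2*sqrt(2) ~ 2.8284
print("local hidden variables: S <= 2;  no-signalling alone: S <= 4")
```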

The Complexity Horizon

Update 7/3/14: Scott Aaronson, horrified at the prevalence of people who casually consider that P might equal NP (like me in the second last paragraph of this post), has produced an exhaustive explanation of why it is stupid to give much credence to this possibility. Since I find myself in agreement with him, I hereby retract my offhand statement that P=NP might pose a problem for the idea of a physical `complexity horizon’. However, I hereby replace it with a much more damning argument in the form of this paper by Oppenheim and Unruh, which shows how to formulate the firewall paradox such that the complexity horizon is no help whatsoever. Having restored balance to the universe, I now return you to the original post.

There have been a couple of really fascinating developments recently in applying computational complexity theory to problems in physics. Physicist Lenny Susskind has a new paper out on the increasingly infamous firewall paradox of black holes, and mathematician Terry Tao just took a swing at one of the millennium problems (a list of the hardest and most important mathematical problems still unsolved). In brief, Susskind extends an earlier idea of Harlow and Hayden, using computational complexity to argue that black holes cannot be used to break the known laws of physics. Terry Tao is a maths prodigy who first learned arithmetic at age 2 from Sesame Street. He published his first paper at age 15 and was made full professor by age 24. In short, he is a guy to watch (which, as it turns out, is easy, because he maintains an exhaustive blog). In his latest adventure, Tao has suggested a brand new approach to an old problem: proving whether sensible solutions exist to the famous Navier-Stokes equations that describe the flow of fluids like water and air. His big insight was to show that the equations can be re-interpreted as rules for doing computations using logical gates made out of fluid. The idea is exactly as strange as it sounds (a computer made of water?!) but it might allow mathematicians to resolve the Navier-Stokes question and pick up a cool million from the Clay Mathematics Institute, although there is still a long way to go before that happens. The point is, both Susskind and Tao used the idea from computational complexity theory that physical processes can be understood as computations. If you just said “computational whaaa theory?” then don’t worry, I’ll give you a little background in a moment. But first, you should go read Scott Aaronson’s blog post about this, since that is what inspired me to write the present post.

Ok, first, I will explain roughly what computational complexity theory is all about. Imagine that you have gathered your friends together for a fun night of board games. You start with tic-tac-toe, but after ten minutes you get bored, because everyone learns the best strategy and then every game becomes a draw. So you switch to checkers. This is more fun, except that your friend George, who is a robot (it is the future, just bear with me), plugs himself into the internet and downloads the world’s best checkers-playing algorithm, Chinook. After that, nobody in the room can beat him: even when your other robot friend Sally downloads the same software and plays against George, they always end in stalemate. In fact, a quick search on the net reveals that there is no strategy that can beat them anymore — the best you can hope for is a draw. Dang! It is just tic-tac-toe all over again. Finally, you move on to chess. Now things seem more even: although your robot friends quickly outpace the human players (including your friend Garry Kasparov), battles between the robots are still interesting; each of them is only as good as its software, and there are many competing versions that are constantly being updated and improved. Even though they play at a higher level than human players, it is still uncertain how a given game between two robots will turn out.


After all of this, you begin to wonder: what is it that makes chess harder to figure out than checkers or tic-tac-toe? The question comes up again when you are working on your maths homework. Why are some maths problems easier than others? Can you come up with a way of measuring the `hardness’ of a problem? Well, that is where computational complexity theory comes in: it tells you how `hard’ a problem is to solve, given limited resources.

The limited resources part is important. It turns out that, if you had an infinite amount of time and battery life, you could solve any problem at all using your iPhone, or a pocket calculator. Heck, given infinite time, you could write down every possible chess game by hand, and then find out whether white or black can force a win, or whether best play always ends in a draw. Of course, you could do it in a shorter time if you got a million people to work on it simultaneously, but then you are using up space for all of those people. Either way, the problem is only interesting when you are limited in how much time or space you have (or energy, or any other resource you care to name). Once you have a resource limit, it makes sense to talk about whether one problem is harder than another (if you want details of how this is done, see for example Aaronson’s blog for his lecture notes on computational complexity theory).
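To get a feeling for why the resource limit is the whole game, here is a back-of-the-envelope Python sketch (my own rough numbers, in the spirit of Shannon’s famous estimate of the chess game tree, and not to be taken too literally):

```python
import math

est_games = {
    "tic-tac-toe": math.factorial(9),   # at most 9! move orderings, ~3.6e5
    "checkers":    10 ** 31,            # rough game-tree estimate from the literature
    "chess":       35 ** 80,            # ~35 options per ply over ~80 plies, ~1e123
}

rate = 1e9                              # suppose we check a billion games per second
seconds_per_year = 3.15e7

for name, n in est_games.items():
    years = n / rate / seconds_per_year
    print(f"{name:>12}: ~{n:.1e} games -> ~{years:.1e} years to enumerate")
```

Tic-tac-toe falls in a fraction of a second; chess outlasts the age of the universe by a hundred orders of magnitude. Same arithmetic, wildly different resource demands.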

This all seems rather abstract so far. But the study of complexity theory turns out to have some rather interesting consequences in the real world. For example, remember the situation with tic-tac-toe. You might know the strategy that guarantees you a win or a draw. But suppose you were playing a dumb opponent who was not aware of this strategy — they might think that it is possible to beat you. Normally, you could convince them that you are unbeatable by just showing them the strategy so they can see for themselves. Now, imagine a super-smart alien came down to Earth and claimed that, just like with tic-tac-toe, it could never lose at chess. As before, it could always convince us by telling us its strategy — but then we could use the alien’s own strategy against it, and where is the fun in that? Amazingly, it turns out that there is a way for the alien to convince us that it has a winning strategy, without ever revealing the strategy itself! This has been proven by the computational complexity theorists (the method is rather complicated, but you can follow it up here).
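(The real result uses interactive proofs and is too intricate for a blog post, but a toy Python sketch of one standard cryptographic building block, the hash commitment, gives the flavour of binding yourself to a secret without revealing it. The chess moves are invented for illustration:)

```python
import hashlib
import os

def commit(secret: bytes) -> tuple[bytes, bytes]:
    """Commit to a secret: publish the hash now, keep the secret and nonce hidden."""
    nonce = os.urandom(16)                # random salt, so short secrets can't be guessed
    return hashlib.sha256(nonce + secret).digest(), nonce

def verify(commitment: bytes, nonce: bytes, secret: bytes) -> bool:
    """Check that a revealed secret matches the earlier commitment."""
    return hashlib.sha256(nonce + secret).digest() == commitment

c, nonce = commit(b"Qh4#")                # the alien commits to its move in advance
# ... the game is played out ...
print(verify(c, nonce, b"Qh4#"))          # True: the story never changed
print(verify(c, nonce, b"Ke2"))           # False: it cannot swap in a different move
```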

So what has this to do with physics? Let’s start with the black-hole firewall paradox. The usual black-hole information paradox says: since information cannot be destroyed, and information cannot leak out of a black hole, how do we explain what happens to the information (say, on your computer’s hard drive) that falls into a black hole, when the black hole eventually evaporates? One popular solution is to say that the information does leak out of the black hole over time, just very slowly and in a highly scrambled-up form so that it looks just like randomness. The firewall paradox puts a stick in the gears of this solution. It says that if you believe this is true, then it would be possible to violate the laws of quantum mechanics.

Specifically, say you had a quantum system that fell into a black hole. If you gathered all of the leaked information about the quantum state from outside the black hole, and then jumped into the black hole just before it finished evaporating, you could combine this information with whatever is left inside the black hole to obtain more information about the quantum state than would normally be allowed by the laws of physics. To avoid breaking the laws of quantum mechanics, you would have to have a wall of infinite energy density at the event horizon (the firewall) that stops you bringing the outside information to the inside, but this seems to contradict what we thought we knew about black holes (and it upsets Stephen Hawking). So if we try to solve the information paradox by allowing information to leak out of the black hole, we just end up in another paradox!

Firewall
Source: New Scientist

One possible resolution comes from computational complexity theory. It turns out that, before you can break the laws of quantum mechanics, you first have to `unscramble’ all of the information that you gathered from outside the black hole (remember, when it leaks out it still looks very similar to randomness). But you can’t spend all day doing the unscrambling, because you are falling into the black hole and about to get squished at the singularity! Harlow and Hayden showed that in fact you do not have nearly as much time as you would need to unscramble the information before you get squished; it is simply `too hard’ complexity-wise to break the laws of quantum mechanics this way! As Scott Aaronson puts it, the geometry of spacetime is protected by an “armor” of computational complexity, kind of like a computational equivalent of the black hole’s event horizon. Aaronson goes further, speculating that there might be problems that are normally `hard’ to solve, but which become easy if you jump into a black hole! (This is reminiscent of my own musings about whether there might be hypotheses that can only be falsified by an act of black hole suicide).

But the matter is more subtle. For one thing, all of computational complexity theory rests on the belief that some problems are intrinsically harder than others, specifically, that there is no ingenious as-yet undiscovered computer algorithm that will allow us to solve hard problems just as quickly as easy ones (for the nerds out there, I’m just saying nobody has proven that P is not equal to NP). If we are going to take the idea of the black hole complexity horizon seriously, then we must assume this is true — otherwise a sufficiently clever computer program would allow us to bypass the time constraint and break quantum mechanics in the firewall scenario. Whether or not you find this to be plausible, you must admit there may be something fishy about a physical law that requires P not equal to NP in order for it to work.

Furthermore, even if we grant that this is the case, it is not clear that the complexity barrier is that much of a barrier. Just because a problem is hard in general does not mean it can’t be solved in specific instances. It could be that for a sufficiently small black hole and sufficiently large futuristic computing power, the problem becomes tractable, in which case we are back to square one. Given these considerations, I think Aaronson’s faith in the ability of computational complexity to save us from paradoxes might be premature — but perhaps it is worth exploring just in case.

Art and science, united by chickens.

When I saw that Anton Zeilinger of the Vienna quantum physics department was hosting a talk by the artist Koen Vanmechelen on the topic of chickens, I dropped everything and ran there in great excitement.

“It has finally happened,” I said to myself, “the great Zeilinger has finally lost his marbles!”


I was wrong, though: it was one of the most interesting talks of the year so far. Vanmechelen began his talk with a stylish photograph of a chicken. He said:

“To you, this might look like just a chicken. But to me, this is a work of art.”


It seemed absurd — here was a room full of physicists, being told that a chicken was art. But as Vanmechelen elaborated on his work, I saw that it was not simply about chickens, in the same way that Rembrandt’s art was not simply about paint. In Vanmechelen’s words: “It is not about the chicken, it is about humans!” Chickens are merely the medium through which Vanmechelen has chosen to express himself. Humans have such precise control over chickens — we breed them for specific purposes, we use them like components in a factory — that it is no wonder Vanmechelen calls the chicken `high-tech’. So why not also use chickens as an artistic medium? Vanmechelen also enjoys working with glass, a seemingly unrelated medium, except that it allows him a similar level of self-expression and self-discovery:

“I like the transparency of glass. You cannot see a window until it is broken. It is the same with people — it is through scars that we come to know ourselves.”

For Vanmechelen, part of his motivation to work with chickens comes from the strange and often profound experiences that this line of work leads him to. One notorious example was his idea to rescue a rooster that had lost one of its spurs. Perhaps to reinstate some of the glory afforded the chicken by its dinosaur heritage, Vanmechelen had surgeons give the rooster a proud new pair of golden spurs.

Shortly afterwards, Vanmechelen was taken to court in Belgium by animal rights activists. It seemed that, by the letter of the law, it was illegal to give chickens prosthetic implants. Vanmechelen defended his work and pointed out that he was helping the rooster, which would have otherwise been an outcast in chicken society, and the activists finally agreed with him. But the judge was adamant: there was still the matter of the law to be settled. Struck by the absurdity of the case, Vanmechelen asked: if prosthetic augmentation was not allowed, then what precisely was it legal to do to a live chicken? The judge unfolded an official document and read from a list. Legally, one could burn its beak, scorch its wings, cut its legs, and more in a similar vein. Needless to say, Vanmechelen did not have to face prison, but the incident stayed with him.

“I am not a scientist, I am not an activist, I am an artist. I do not pass judgement, I simply comment on what I see.”

He called the animal rights activists afterwards. He said to them, “I have done my job as an artist. Now you can do your job as an activist: change the law”.

Vanmechelen’s major work has much less to do with chickens and much more to do with people. The Cosmopolitan Chicken Project is an exercise in fertility. Travelling around the world, Vanmechelen collects chickens that have been selectively bred to suit their country of origin, and creates cross-breeds. He notes that each country has developed a breed of chicken that represents the nation; as an extreme example, the French Poulet de Bresse has a red crest, white body and blue-tinged legs, matching the country’s flag.

Image credit: the internets.
Poulet de Bresse

“When you put an animal in a frame, you halt its evolution,” he explains. “The chickens become infertile through too much inbreeding. Cross-breeding restores life and fertility to the species. It is the same for humans.”

Duality is also a major theme in Vanmechelen’s work: every organism needs another organism to survive. Humans have not simply enslaved chickens — we are in turn enslaved by them. There are over 24 billion chickens in the world today, about three and a half per person. Historically, we have taken them everywhere with us, to such an extent that researchers at the University of Nottingham can even trace the movements of humans through the genomes of chickens.

This duality can be seen directly in the theory of coding and information. Take two messages of the same length and combine them by swapping every second letter between the two. Suppose we separate the resulting scrambled halves and give them to different people. It doesn’t matter how many times you copy one half, you will never recover the message — you will stagnate from inbreeding the same information. But if you get together with someone who has different information that comes from the other half, you can combine your halves to discover the hidden message that was there all along.
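For the programmers in the audience, here is the letter-swapping scheme spelled out in Python (the two messages are invented for illustration):

```python
def scramble(msg1: str, msg2: str) -> tuple[str, str]:
    """Swap every second letter between two equal-length messages."""
    a, b = list(msg1), list(msg2)
    for i in range(1, len(a), 2):
        a[i], b[i] = b[i], a[i]
    return "".join(a), "".join(b)

# The swap is its own inverse: applying it twice restores the originals.
h1, h2 = scramble("DUALITYINCODE!", "CHICKENGENOMES")
print(h1, "/", h2)          # each half alone is scrambled nonsense
print(scramble(h1, h2))     # ('DUALITYINCODE!', 'CHICKENGENOMES')
```

Copying h1 a thousand times tells you nothing new; only by combining the two halves do both messages reappear.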

By the end of Vanmechelen’s talk, I finally understood why Professor Zeilinger had invited him here, to a physics department, to talk about art. In isolation, every discipline stagnates and becomes inbred. I rarely go to see talks by scientists, but I always find talks by artists stimulating. Why is that? Perhaps the reason is not that scientists are dull, but simply that I am one of them. Sometimes, to unlock the riches of your own discipline, you need to introduce random mutations from the outside. So bring on the artists!


Stories to explode your mind.

As you might have guessed from posts like this one, I am a huge fan of the technique of using a fictional story to get across an idea or a concept. The following are links to some of my favourite examples of this underrated art form, more or less in order of preference.

1. Scott Aaronson, On Self-Delusion and Bounded Rationality. Clearly inspired by the classic sci-fi story “Flowers for Algernon”, Aaronson’s own fable is a meditation on what it means to be rational.

2. Nick Bostrom, The Fable of the Dragon-Tyrant. I’ve mentioned Bostrom before, but I was unaware of his storytelling powers until I came across this gem. Here, he weaves a story that cleverly gets the reader on his side, before drawing back the curtain to show us what was really at stake the whole time.

3. Eliezer Yudkowsky, Zombies: The Movie. For a change of tone, I love this light-hearted jab at philosopher David Chalmers’s idea of a philosophical zombie, in the form of a movie script.

Finally, although it is in a somewhat different vein to the above links, I have to mention the work of writer Greg Egan, which epitomizes the concept of “hard sci-fi”: flights of the imagination conceived not only in the spirit of modern science, but in its very clothing. Excerpts from his novels and complete versions of his short stories can be read online here.

Black holes, bananas, and falsifiability.

Previously I gave a poor man’s description of the concept of `falsifiability’, which is a cornerstone of what most people consider to be good science. This is usually expressed in a handy catchphrase like `if it isn’t falsifiable, then it isn’t science’. For the layperson, this is a pretty good rule of thumb. A professional scientist or philosopher would be more inclined to wonder about the converse: suppose it is falsifiable, does that guarantee that it is science? Karl Popper, the man behind the idea, has been quoted as saying that basically yes: not only must a scientific theory be falsifiable, but a falsifiable theory is also scientific [1]. However, critics have pointed out that it is possible to have theories that are not scientific and yet can still be falsified. A classic example is Astrology, which has been “thoroughly tested and refuted” [2] (although sadly this has not stopped many people from believing in it). Given that it is falsifiable (and falsified), it seems one must therefore either concede that Astrology was a scientific hypothesis which has since been disproved, or else concede that we need something more than just falsifiability to distinguish science from pseudo-science.

Things are even more subtle than that, because a falsifiable statement may appear more or less scientific depending on the context in which it is framed. Suppose that I have a theory which says that there is cheese inside the moon. We could test this theory, perhaps by launching an expensive space mission to drill the moon for cheese, but nobody would ever fund such a mission because the theory is clearly ludicrous. Why is it ludicrous? Because within our existing theoretical framework and our knowledge of planet formation, there is no role played by astronomical cheese. However, imagine that we lived in a world in which it was discovered that cheese was a naturally occurring substance in space, and indeed had a crucial role to play in the formation of planets. In some instances, the formation of moons might lead to them retaining their cheese substrate, hidden under layers of meteorite dust. Within this alternative historical framework, the hypothesis that there is cheese inside the moon is actually a perfectly reasonable scientific hypothesis.

Wallace and Gromit
Yes, but does it taste like Wensleydale?

The lesson here is that the demarcation problem between science and pseudoscience (not to mention non-science and un-science, which are different concepts [2]) is not a simple one. In particular, we must be careful about how we use ideas like falsification to judge the scientific content of a theory. So what is the point of all this pontificating? Well, recently the prominent scientist and blogger Sean Carroll argued that the scientific idea of falsification needs to be “retired”. In particular, he argued that String Theory and theories with multiple universes have been unfairly branded as `unfalsifiable’ and thus have not been given the recognition by scientists that they deserve. Naturally, this alarmed people, since it really sounded like Sean was saying `scientific theories don’t need to be falsifiable’.

In fact, if you read Sean’s article carefully, he argues that it is not so much the idea of falsifiability that needs to be retired, but the incorrect usage of the concept by scientists without sufficient philosophical education. In particular, he suggests that String Theory and multiverse theories are falsifiable in a useful sense, but that this fact is easily missed by people who do not understand the subtleties of falsifiability:

“In complicated situations, fortune-cookie-sized mottos like `theories should be falsifiable’ are no substitute for careful thinking about how science works.”

Well, one can hardly argue against that! Except that Sean has committed a couple of minor crimes in the presentation of his argument. First, while Sean’s actual argument (which almost seems to have been deliberately disguised for the sake of sensationalism) is reasonable, his apparent argument would lead most people to draw the conclusion that Sean thinks unfalsifiable theories can be scientific. Peter Woit, commenting on the related matter of Max Tegmark’s recent book, points out that this kind of talk from scientists can be fuel for crackpots and pseudoscientists who use it to appear more legitimate to laymen:

“If physicists like Tegmark succeed in publicizing and getting accepted as legitimate mainstream science their favorite completely empty, untestable `theory’, this threatens science in a very real way.”

Secondly, Sean claims that String Theory is at least in principle falsifiable, but if one takes the appropriately subtle view of falsifiability as he suggests, one must admit that `in principle’ falsifiability is rather a weak requirement. After all, the cheese-in-the-moon hypothesis is falsifiable in principle, as is the assertion that the world will end tomorrow. At best, Sean’s argument goes to show that we need criteria other than falsifiability to judge whether String Theory is scientific, but given the large number of free parameters in the theory, one wonders whether it won’t fall prey to something like the `David Deutsch principle‘, which says that a theory should not be too easy to modify retrospectively to fit the observed evidence.

While the core idea of falsifiability is here to stay, I agree with Scott Aaronson that remarkably little progress has been made since Popper on building upon this idea. For all their ability to criticise and deconstruct, the philosophers have not really been able to tell us what does make a theory scientific, if not merely falsifiability. Sean Carroll suggests considering whether a theory is `definite’, in that it makes clear statements about reality, and `empirical’ in that these statements can be plausibly linked to physical experiments. Perhaps the falsifiability of a claim should also be understood as relative to a prevailing paradigm (see Kuhn).

In certain extreme scenarios, one might also be able to make the case that the falsifiability of a statement is relative to the place of the scientists in the universe. For example, it is widely believed amongst physicists that no information can escape a black hole, except perhaps in a highly scrambled-up form, as radiated heat. But as one of my friends pointed out to me today, this seems to imply that certain statements about the interior of the black hole cannot ever be falsified by someone sitting outside the event horizon. Suppose we had a theory that there was a banana inside the black hole. To check the theory, we would likely need to send some kind of banana-probe (a monkey?) into the black hole and have it come out again — but that is impossible. The only way to falsify such a statement would be to enter the black hole ourselves, but then we would have no way of contacting our friends back home to tell them they were right or wrong about the banana. If every human being jumped into the black hole, the statement would indeed be falsifiable. But if exactly half of the population jumped in, is the statement falsifiable for them and not for anyone else? Could the falsifiability of a statement actually depend on one’s physical place in the universe? This would indeed be troubling, because it might mean there are statements about our universe that are in principle falsifiable by some hypothetical observer, but not by any of us humans. It becomes disturbingly similar to predictions about the afterlife – they can only be confirmed or falsified after death, and then you can’t return to tell anyone about it. Plus, if there is no afterlife, an atheist doesn’t even get to bask in the knowledge of being correct, because he is dead.

We might hope that statements about quasi-inaccessible regions of experience, like the insides of black holes or the contents of parallel universes, could still be falsified `indirectly’ in the same way that doing lab tests on ghosts might lend support to the idea of an afterlife (wouldn’t that be nice). But how indirect can our tests be before they become unscientific? These are the interesting questions to discuss! Perhaps physicists should try to add something more constructive to the debate instead of bickering over table-scraps left by philosophers.

[1] “A sentence (or a theory) is empirical-scientific if and only if it is falsifiable” Popper, Karl ([1989] 1994). “Falsifizierbarkeit, zwei Bedeutungen von”, pp. 82–86 in Helmut Seiffert and Gerard Radnitzky. (So there.)

[2] See the Stanford Encyclopedia of Awesomeness.

So can we time-travel or not?!

In a comment on my last post, elkement asked:

“What exactly are the limits for having an object time-travel that is a bit larger than a single particle? Or what was the scope of your work? I am asking because papers such as your thesis are very often hyped in popular media as `It has been proven that time-travel does work’ (insert standard sci-fi picture of curved space here). As far as I can decode the underlying papers, such models are mainly valid for single particles (?), but I have no feeling about numbers and dimensions, decoherence etc.”

Yep, that is pretty much THE question about time travel – can we do it with people or not? (Or even with rats, that would be good too). The bottom line is that we still don’t know, but I might as well give a longer answer, since it is just interesting enough to warrant its own blog post.

First of all, nobody has yet been able to prove that time travel is either possible or impossible according to the laws of physics. This is largely because we don’t yet know what laws govern time travel — for that we’d almost certainly need a theory of quantum gravity. In order for humans to time-travel, we would probably need to use a space-time wormhole, as proposed by Morris, Thorne and Yurtsever in the late eighties [1]. Their paper originated the classic wormhole graphic that everyone is so fond of:

(Used without permission from the publisher — shh!)

However, there are at least a couple of compelling arguments why it should be impossible to send people back in time, one of which is Stephen Hawking’s “Chronology Protection Conjecture”. This is commonly misrepresented as the argument “if time travel is possible, where are all the tourists from the future?”. While Stephen Hawking did make a comment along these lines, he was just joking around. Besides, there is a perfectly good reason why we might not have been visited by travellers from the future: according to the wormhole model, you can only go back in time as far as the moment when you first invented the time machine, or equivalently, the time at which the first wormhole mouth opens up. Since we haven’t found any wormhole entrances in space, nor have we created one artificially, it is no surprise that we haven’t received any visitors from the future.

via Saturday Morning Breakfast Cereal

The real Chronology Protection Conjecture involves a lot more mathematics and head-scratching. Basically, it says that matter and energy should accumulate near the wormhole entrance so quickly that the whole thing will collapse into a black hole before anybody has time to travel through it. The reason that it is still only a conjecture and has not been proven, is that it relies upon certain assumptions about quantum gravity that may or may not be true — we won’t know until we have such a theory. And then it might just turn out that the wormhole is somehow stable after all.

The other reason why time travel for large objects might be impossible is that, in order for the wormhole to be stable and not collapse in on itself Hawking-style, you need matter with certain quantum properties that can support the wormhole against collapse [2]. But it might turn out that it is just impossible to create enough of this special matter in the vicinity of a wormhole to keep it open. This is a question that one could hope to answer without needing a full theory of quantum gravity, because it depends only on the shape of the space-time and certain properties of quantum fields within that space-time. However, the task of answering this question is so ridiculously difficult mathematically that nobody has yet been able to do it. So the door is still open to the possibility of time-travelling humans, at least in theory.

To my mind, though, the biggest reason is not theoretical but practical: how the heck do you create a wormhole? We can’t even create a black hole of any decent size (if any had shown up at the LHC, they would have been microscopic and very short-lived). So how can we hope to manipulate the vast amounts of matter and energy required to bend space-time into a loop (and a stable loop, no less) without annihilating ourselves in the process? Even if we were lucky enough to find a big enough, ready-made wormhole somewhere out in space, it would almost certainly be so far away as to make it nearly impossible to get there, due to the sheer demands on technology. It’s a bit like asking: could humans ever build a friendly hotel in the centre of the sun? Well, it might be technically possible, but there is no way it would ever happen; even if we could raise humungous venture capital for the Centre-of-the-Sun Hotel, it would just be too damn hard.

The good news is that it might be more feasible to create a cute, miniature wormhole that only exists for a short time. This would require much smaller energies that might not destroy us in the process, and might be easier to manipulate and control (assuming quantum gravity allows it at all). So, while there is as yet no damning proof that time-travel is impossible, I still suspect that the best we can ever hope to do is to be able to send an electron back in time by a very short amount, probably not more than one millisecond — which would be exciting for science nerds, but perhaps not the headline that the newspapers would have wanted.

[1] Fun fact: while working on the novel “Contact”, Carl Sagan consulted Kip Thorne about the physics of time-travel.

[2] For the nerds out there, you need matter that violates the averaged null energy condition (ANEC). You can look up what this means in any textbook on General Relativity — for example this one.