Category Archives: Anthropics

Bostrom beats the Boltzmann Brains

In a previous post I looked at Boddy and Carroll’s controversial claim that a cosmological model which predicts Boltzmann Brains is problematic. In essence, they claim that if a theory predicts the existence of many observers whose subjective experience is identical to yours, but whose brains arose from random thermal fluctuations, then the theory is `cognitively unstable’ because (1) the laws of physics indicate there is a good chance you are one of those observers, but if you are, then (2) you can’t trust your reasons for believing those same laws that led you to that conclusion, because they appeared in your brain by random chance!

Personally, I am suspicious that there might be some kind of hidden flaw in this rather mind-bendy argument. So far, critics have argued that Carroll’s assertion of (1) is far from obvious, and that alternative assumptions exist that do not lead to a problem. However, apart from Mark Srednicki, everyone seems to think that their own choice of starting assumption is obviously correct. For my part, I really do not think that the question `could I have been born as somebody else?’ has an obvious answer that can be derived in the absence of philosophical considerations. It is one thing to ask, `would this ball fall to the ground if I dropped it?’ and quite another thing to ask `if my parents had conceived me one day earlier, would I be the same person?’ One question is quite easily defined and answered by known science, the other not so much. Luckily, there is a man called Nick Bostrom who is much smarter than you and me and has spent much more time thinking about such things. Below is my own take on his take on the problem.

The Bearded Man Paradox

Imagine you have two competing theories of the universe. In both theories, the universe consists of three cells, walled off from each other. In each cell there materializes a bearded man, whose beard is either black or white. In the first theory, called TB, two men have black beards and one has a white beard. In the second theory, TW, two men have white beards and only one has a black beard. Now, suppose you have materialized as a black-bearded man in a cell. Given your subjective experience of having a black beard, which theory is more likely to be true?

Intuitively, we would tend to think this supports theory TB, which allows for more black-bearded observers. But this intuition rests on an assumption: namely, that we might have materialized as a white-bearded man. But to allow for this possibility means that we are including in our reference-class (i.e. the set of people that we might have been) some observers whose subjective experience is different from ours, namely, we are allowing that we might have had the experience of having a white beard, even though we in fact have a black one. We could instead argue that, since there exists at least one black-bearded man in both models, the knowledge that we have a black beard simply tells us that we exist within the relevant subset of observers in either model that have black beards. Indeed, if it was a given that we were always going to be one of the black-bearded observers in the model, then the fact of having a black beard tells us nothing about which model is the correct one! Just like in the Adam & Eve problem, it is an issue of assigning your `soul’ to possible bodies, which is indeed dependent on philosophical assumptions.
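The dependence on the reference class can be made explicit with a small Bayesian sketch. The beard counts come from the story above; the equal priors on the two theories are my own assumption, not part of the problem statement:

```python
from fractions import Fraction

# Equal priors on the two theories (my assumption, for illustration).
prior = {"TB": Fraction(1, 2), "TW": Fraction(1, 2)}

# Likelihood of "I observe a black beard" under each reference-class choice.
# Broad reference class: I might have been any of the three men.
broad = {"TB": Fraction(2, 3), "TW": Fraction(1, 3)}
# Narrow reference class: I was always going to be a black-bearded man.
narrow = {"TB": Fraction(1), "TW": Fraction(1)}

def posterior_TB(likelihood):
    """P(TB | black beard), by Bayes' rule."""
    num = prior["TB"] * likelihood["TB"]
    return num / (num + prior["TW"] * likelihood["TW"])

print(posterior_TB(broad))   # 2/3 -> the black beard favours TB
print(posterior_TB(narrow))  # 1/2 -> the black beard tells you nothing
```

The two answers differ only because of the reference-class choice, which is exactly the philosophical input the text is pointing at.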

Bostrom argues that we should allow our reference class to include observers with slightly different subjective experiences. To Bostrom, for a black-bearded man to believe that he always had to have a black beard in any possible universe is absurd. However, even if you are not willing to go as far as that, you must ask how closely an observer’s experiences must match yours before you are willing to concede that you `might have been’ that observer. What if, instead of beards, the observers were identical except that one of them happened to get an itch on his elbow at a particular moment in time, whereas the others got an itch on their noses? How similar must your possible incarnations be, in order for you to consider them possible experiences for yourself?

Bostrom saves the day

Unlike Carroll, Bostrom considers the `problem’ of Boltzmann Brains to be the fact that they might lead us to reason `incorrectly’ in cases like the bearded men problem. But Bostrom’s judgement of `incorrect reasoning’ rests only on his intuitive feeling that the observation of our subjective experiences (like not being a Boltzmann Brain) should allow us to prefer models in which there are more observers having this property than not. This intuition is expressly not shared by Jacques Distler (nor, presumably, by Motl). Distler sees no reason to look for a model free from Boltzmann Brains, so long as the present model predicts the existence of at least one real Jacques Distler who is not a Boltzmann Brain. Since he had to end up as the real Jacques Distler in either scenario, the fact that he is himself and not a Boltzmann Brain gives him no further information to choose between models.

But if we agree with Bostrom for the moment, then we can rescue Boddy and Carroll: for then there is a perfectly good reason to prefer a BB-free model over one that contains BB’s, based just on probabilities, without needing to invoke `cognitive instability’. To reason correctly, Bostrom says we should include in our reference class of observers those whose subjective experience might have been slightly different to our actual experience (e.g. that we might have had a white beard, even though we have a black one). But in the present case, this means including BBs whose fake past experiences might deviate somewhat from our actually observed experiences. If the deviations were consistent with the overall laws of physics observed in our experiences, this would not matter; but since Boltzmann Brain memories are random fluctuations, the overwhelming majority of BBs’ experiences are not exactly like ours, but are completely whacked-out and crazy. If you allow that you might have been one of the conscious BB’s with a completely scrambled-up subjective experience, then the fact that your life is remarkably ordered and consistent with the laws of physics is far better explained by a model that is completely free from BB’s than by a model that contains legions of BB’s. Thus, it appears Bostrom’s philosophical argument lends support to Boddy and Carroll’s conclusion, but by using an alternative argument to the possibly suspect notion of cognitive instability. Since I share Bostrom’s philosophical sentiment on this matter, I also agree with Carroll’s conclusion: a model without BB’s is better than a model with BB’s (provided one does not have to include any nasty ad-hoc elements to get rid of them). From this point of view, Boddy and Carroll’s work is well-motivated after all.
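This argument can be made quantitative with a toy likelihood comparison. All the observer counts below are invented purely for illustration; the point is only that once scrambled-experience BBs are admitted into the reference class, an ordered experience strongly favours the BB-free model:

```python
from fractions import Fraction

# Made-up observer counts in a toy BB-containing model (illustrative only).
n_ordinary = 10**9                 # observers who evolved normally
n_bb       = 10**40                # Boltzmann Brains
f_ordered  = Fraction(1, 10**20)   # fraction of BBs whose experience looks ordered

# P(my experience looks ordered | I am a random observer in each model):
p_ordered_bb_model = (Fraction(n_ordinary) + f_ordered * n_bb) / (n_ordinary + n_bb)
p_ordered_no_bb    = Fraction(1)   # every observer evolved normally

# Likelihood ratio in favour of the BB-free model, given an ordered experience:
likelihood_ratio = p_ordered_no_bb / p_ordered_bb_model
print(likelihood_ratio)
```

With any numbers in which BBs vastly outnumber ordinary observers and ordered BBs are rare, the ratio is astronomical, which is the probabilistic (rather than `cognitive instability’) route to preferring the BB-free model.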

In fact, I’m even going to go out on a limb here and say that this work sets a precedent for using a philosophical argument as a constructive tool for building theoretical models. This is certainly not a new idea, but physicists nowadays seem to dismiss philosophy too quickly; they have forgotten how to make philosophy work for them in doing real physics. I hope the Boddy and Carroll paper encourages more physicists to look to philosophy for inspiration in the fine art of theory-building.


Halloween special: Boltzmann Brains

Author’s note: I wanted to wait before doing another post on anthropic reasoning, but this topic was just too good to pass up just after Halloween [1].

The Incredible Hercules #133

1: Are You A Disembodied Brain?

Our story begins with Ludwig Boltzmann’s thermodynamic solution to the arrow-of-time problem. The problem is to explain why the laws of physics at the microscopic scale appear to be reversible, but the laws as seen by us seem to follow a particular direction from past to future. Boltzmann argued that, provided the universe started in a low entropy state, the continual increase in entropy due to the second law of thermodynamics would explain the observed directionality of time. He thereby reduced the task to the lesser problem of explaining why the universe started in a low entropy state in the first place (incidentally, that is pretty much where things stand today, with some extra caveats). Boltzmann had his own explanation for this, too: he argued that if the universe were big enough, then even though it might be in a maximum entropy state, there would have to be random fluctuations in parts of the universe that would lead to local low-entropy states. Since human beings could only have come to exist in a low-entropy environment, we should not be surprised that our part of the universe started with low entropy, even though this is extremely unlikely within the overall model. Thus, one can use an observer-selection effect to explain the arrow of time.

Sadly, there is a crucial flaw in Boltzmann’s argument: it doesn’t explain why we find ourselves in such a large region of low entropy as the observable universe. The conditions for conscious observers to exist could have occurred in a spontaneous fluctuation much smaller than the observable universe – so if the total universe was indeed very large and in thermal equilibrium, we should expect to find ourselves in just a small bubble of orderly space, outside of which is just featureless radiation, instead of the stars and planets that we actually do see. In fact, the overwhelming majority of conscious observers in such a universe would just be disembodied brains that fluctuated into existence by pure chance. It is extremely unlikely that any of these brains would share the same experiences and memories as real people born and raised on Earth within a low-entropy patch of the universe, so Boltzmann’s argument seems unable to account for the fact that our experiences are consistent with having evolved within such a low-entropy patch, rather than with the hypothesis that we are disembodied brains surrounded by thermal radiation.
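The counting behind this flaw comes from the fact that the probability of a spontaneous fluctuation scales roughly as exp(-ΔS), where ΔS is the entropy cost of the fluctuation. The entropy values below are made up (the real ones are vastly larger); only the ordering matters, and the probabilities themselves are so small that the comparison has to be done in log space:

```python
# Illustrative (made-up) entropy costs, in units of Boltzmann's constant,
# of fluctuating each structure out of thermal equilibrium.
delta_s_brain    = 1e20   # a single brain-sized region of order
delta_s_universe = 1e50   # an observable-universe-sized region of order

# P(fluctuation) ~ exp(-delta_S). The probabilities underflow any float,
# so compare their logarithms instead:
log_ratio = delta_s_universe - delta_s_brain  # log of P(brain) / P(universe)
print(f"lone brains are favoured by a factor of roughly exp({log_ratio:.3g})")
```

Whatever the true numbers, the small fluctuation always wins by a double-exponential margin, which is why lone brains swamp whole low-entropy universes in this kind of model.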

Matters, as always, are not quite so simple. It is possible to rescue Boltzmann’s argument by the following rationale. Suppose I believe it possible that I could be a Boltzmann Brain. Clearly, my past experiences exhibit a level of coherence and order that is not typical of your average Boltzmann Brain. However, there is still some subset of Boltzmann Brains which, by pure chance, fluctuated into existence with an identical set of memories and past experiences encoded into their neurons so as to make their subjective experiences identical to mine. Even though they are a tiny fraction of all Boltzmann Brains, there are still vastly more of them than there are `really human’ versions of me that actually evolved within a large low-entropy sub-universe. Hence, conditional on my subjective experience thus far, I am still forced to conclude that I am overwhelmingly more likely to be a Boltzmann Brain, according to this theory.

2: Drama ensues

Quite recently, Sean Carroll wrote a paper (publicized on his blog) in which he and co-author Kim Boddy use the Higgs mechanism to “solve the problem of Boltzmann Brains” in cosmology. The setting is the ΛCDM model of cosmology, which is a little different from Boltzmann’s model of the universe, but suffers a similar problem: in the case of ΛCDM, the universe keeps expanding forever, eventually reaching a maximum entropy state (aka “heat death”), after which Boltzmann Brains have as much time as they need to fluctuate randomly out of the thermal noise. Such a model, argues Carroll, would imply that it is overwhelmingly likely that we are Boltzmann Brains.

Why is this a problem? Carroll puts it down to what he calls “cognitive instability”. Basically, the argument goes like this. Suppose you believe in a model of cosmology that has Boltzmann Brains. Then you should believe that you are most likely to be one of them. But this means that your reasons for believing in the model in the first place cannot be trusted, since they are not based on actual scientific evidence, but instead simply fluctuated into your brain at random. In essence, you are saying `based on the evidence, I believe that I am an entity that cannot believe in anything based on what it thinks is evidence’. A cognitively unstable theory therefore cannot both be true and be justified by observed evidence. Carroll’s solution to this problem is to reject the model in favor of one that doesn’t allow for the future existence of Boltzmann Brains.

Poor Carroll has taken a beating over at other blogs. Luboš Motl provided a lengthy response filled with the usual ad-hominems:

`…I really think that Carroll’s totally wrong reasoning is tightly linked to an ideology that blinds his eyes. As a hardcore leftist […] he believes in various forms of egalitarianism. Every “object” has the same probability.’

Jacques Distler wrote:

`…This is plainly nuts. How can a phase transition that may or may not take place, billions of years in the future, affect anything that we measure in the here-and-now? And, if it doesn’t affect anything in the present, why do I &#%@ care?’

Only Mark Srednicki seemed able to disagree with Carroll without taking the idea as a personal affront; his level-headed discussions with Carroll and Distler helped to clarify the issue significantly. Ultimately, Srednicki agrees with the conclusions of both Motl and Distler, but for slightly different reasons. The ensuing discussion can be summarized something like this:

Distler: A model of the universe does not allow you to make predictions by itself. You also need to supply a hypothesis about where we exist within the universe. And any hypothesis in which we are Boltzmann Brains is immediately refuted by empirical evidence, namely when we fail to evaporate into radiation in the next second.

Motl: Yep, I basically agree with Distler. Also, Boddy and Carroll are stupid Marxist idiots.

Srednicki: Hang on guys, it’s more subtle than that. Carroll is simply saying that, in light of the evidence, a cosmological model without Boltzmann Brains is better than one that has Boltzmann Brains in it. Whether this is true or not is a philosophical question, not something that is blindingly obvious.

Distler: Hmmpf! Well I think it is blindingly obvious that the presence or absence of Boltzmann Brains has no bearing on choosing between the two models. Your predictions for future events would be the same in both.

Srednicki: That’s only true if, within the Boltzmann Brain model, you choose a xerographic distribution that ensures you are a non-Boltzmann-brain. But the choice of xerographic distribution is a philosophical one.

Distler: I disagree – Bayesian theory says that you should choose the prior distribution that converges most quickly to the `correct’ distribution as defined by the model. In this case, it is the distribution that favours us not being Boltzmann Brains in the first place.

(Meanwhile, at Preposterous Universe…)

Carroll: I think that it is obvious that you should give equal credence to yourself being any one of the observers in a model who have previous experiences identical to yours. It follows that a model without Boltzmann Brains is better than a model with Boltzmann Brains due to cognitive instability.

Srednicki: Sorry Carroll – your claim is not at all obvious. It is a philosophical assumption that cannot be derived from any laws within the model. Under a different assumption, Boltzmann Brains aren’t a problem.

There is a potential flaw in the Distler/Motl argument: it rests on the premise that, if you are indeed a Boltzmann Brain, this can be taken as a highly falsifiable hypothesis which is falsified one second later when you fail to evaporate. But strictly speaking, that only rules out a subset of possible Boltzmann Brain hypotheses – there is still the hypothesis that you are a Boltzmann Brain whose experiences are indistinguishable from a real person’s right up until the day you die, at which point they go `poof’, and the hypothesis that you are one of these brains is not falsifiable. Sure, there are vastly fewer Boltzmann Brains with this property, but in a sufficiently long-lived universe there are still vastly more of them than the `real you’. Thus, the real problem with Boltzmann Brains is not that they are immediately falsified by experience, but quite the opposite: they represent an unfalsifiable hypothesis. Of course, this also immediately resolves the problem: even if you were a Boltzmann Brain, your life will proceed as normal (by definition, you have restricted yourself to BB’s whose subjective experiences match those of a real person’s life), so this belief has no bearing on your decisions. In particular, your subjective experience gives you no reason to prefer one model to another just because the former contains Boltzmann Brains and the latter doesn’t. One thereby arrives at the same conclusion as Distler and Motl, but by a different route. However, this also means that Distler and Motl cannot claim that they are not Boltzmann Brains based on observed evidence, if one assumes a BB model. Either they think Boddy and Carroll’s proposed alternative is not a viable theory in its own right, or else they don’t think that a theory containing a vast number of unfalsifiable elements is any worse than a similar theory that doesn’t need such elements, which to me sounds absurd.

I think that Mark Srednicki basically has it right: the problem at hand is yet another example of anthropic reasoning, and the debate here is actually about how to choose an appropriate `reference class’ of observers. The reference class is basically the set of observers within the model that you think it is possible that you might have been. Do you think it is possible that you could have been born as somebody else? Do you think you might have been born at a different time in history? What about as an insect, or a bacterium? In a follow-up post, I’ll discuss an idea that lends support to Boddy and Carroll’s side of the argument.

[1] If you want to read about anthropic reasoning from somebody who actually knows what they are talking about, see Anthropic Bias by Nick Bostrom. Particularly relevant here is his discussion of `freak observers’.

The Adam and Eve Paradox

One of my favourite mind-bending topics is probability theory. It turns out that, for some reason, human beings are very bad at grasping how probability works. This is evident in many phenomena: why do we think the roulette wheel is more likely to come up black after a long string of reds? Why do people buy lottery tickets? Why is it so freakin’ hard to convince people to switch doors in the famous Monty Hall Dilemma?

Part of the problem is that we seem to think we understand probability much better than we actually do. This is why card sharks and dice players continue to make a living by swindling people who fall into common traps. Studying probability is one of the most humbling things a person can do. One area that has particular relevance to physics is the concept of anthropic reasoning. We base our decisions on prior knowledge that we possess. But it is not always obvious which prior knowledge is relevant to a given problem. There may be some cases where the mere knowledge that you exist – in this time, as yourself – might conceivably tell you something useful.

The anthropic argument in cosmology and physics is the proposal that some observed facts about the universe can be explained simply by the fact that we exist. For example, we might wonder why the cosmological constant is so small. In 1987, Steven Weinberg argued that if it were any bigger, it would not have been possible for life to evolve in the universe – hence, the mere fact that we exist implies that the value of the constant is below a certain limit. However, one has to be extremely careful about invoking such principles, as we will see.

This blog post is likely to be the first among many, in which I meditate on the subtleties of probability. Today, I’d like to look at an old chestnut that goes by many names, but often appears in the form of the `Adam and Eve’ paradox.

Spranger – Adam and Eve (Kunsthistorisches Museum, Wien)

Adam finds himself to be the first human being. While he is waiting around for Eve to turn up, he is naturally very bored. He fishes around in his pocket for a coin. Just for a laugh, he decides that if the coin comes up heads, he will refuse to procreate with Eve, thereby dooming the rest of the human race to non-existence (Adam has a sick sense of humour). However, if the coin comes up tails, he will conceive with Eve as planned and start the chain of events leading to the rest of humanity.

Now Adam reasons as follows: `Either the future holds a large number of my future progeny, or it holds nobody else besides myself and Eve. If indeed it holds many humans, then it is vastly more likely that I should have been born as one of them, instead of finding myself rather co-incidentally in the body of the first human. On the other hand, if there are only ever going to be two people, then it is quite reasonable that I should find myself to be the first one of them. Therefore, given that I already find myself in the body of the first human being, the coin is overwhelmingly likely to come up heads when I flip it.’ Is Adam’s reasoning correct? What is probability of the coin coming up heads?

As with many problems of a similar ilk, this one creates confusion by leaving out certain crucial details that are needed in order to calculate the probability. Because of the sneaky phrasing of the problem, however, people often don’t notice that anything is missing – they bring along their own assumptions about what these details ought to be, and are then surprised when someone with different assumptions ends up with a different probability, using just as good a logical argument.

Any well-posed problem has an unambiguous answer. For example, suppose I tell you that there is a bag of 35 marbles, 15 of which are red and the rest blue. This information is now sufficient to state the probability that a marble taken from the bag is red. But suppose I told you the same problem, without specifying the total number of marbles in the bag. So you know that 15 are red, but there could be any number of additional blue marbles. In order to figure out the probability of getting a red marble, you first have to guess how many blue marbles there are, and in this case (assuming the bag can be infinitely large) a guess of 20 is as good as a guess of 20000, but the probability of drawing a red marble is quite different in each case. Basically, two different rational people might come up with completely different answers to the question because they made different guesses, but neither would be any more or less correct than the other person: without additional information, the answer is ambiguous.
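The contrast between the well-posed and ill-posed versions of the marble problem is easy to make concrete (the particular guesses for the blue count are just the ones mentioned above):

```python
from fractions import Fraction

red, total = 15, 35

# Well-posed version: total is given, so the answer is unambiguous.
p_red_well_posed = Fraction(red, total)
print(p_red_well_posed)  # 3/7

# Ill-posed version: the blue count is unspecified, so each "rational"
# guess yields a different, equally defensible probability.
for blue_guess in (20, 20000):
    print(blue_guess, Fraction(red, red + blue_guess))
```

Neither guess is more correct than the other; the disagreement is built into the missing information, not into anyone’s reasoning.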

In the case of Adam’s coin, the answer depends on things like: how do souls get assigned to bodies? Do you start with one soul for every human who will ever live and then distribute them randomly? If so, then doesn’t this imply that certain facts about the future are pre-determined, such as Adam’s decision whether or not to procreate? We will now see how it is possible to choose two different contexts such that in one case, Adam is correct, and in the other case he is wrong. But just to avoid questions of theological preference, we will rephrase the problem in terms of a more real-world scenario: actors auditioning for a play.

Imagine a large number of actors auditioning for the parts in the Play of Life. Their roles have not yet been assigned. The problem is that the director has not yet decided which version of the play he wishes to run. In one version, he only needs two actors, while in the other version there is a role for every applicant.

In the first version of the play, the lead actor flips a coin and it comes up heads (the coin is a specially designed stage-prop that is weighted to always come up heads). The lead actress then joins the lead actor onstage, and no more characters are required. In the second version of the play, the coin is rigged to come up tails, and immediately afterwards a whole ensemble of characters comes onto the scene, one for every available actor.

The director wishes to make his decision without potentially angering the vast number of actors who might not get a part. Therefore he decides to use an unconventional (and probably illegal) method of auditioning. First, he puts all of the prospective actors to sleep; then he decides by whatever means he pleases which version of the play to run. If it is the first version, he randomly assigns the roles of the two lead characters and has them dressed up in the appropriate costumes. As for all the other actors who didn’t get a part, he has them loaded into taxis and sent home with an apologetic letter. If he decides on the second version of the play, then he assigns all of the roles randomly and has the actors dressed up in the costumes of their characters, ready to go onstage when they wake up.

Now imagine that you are one of the actors, and you are fully aware of the director’s plan, but you do not know which version of the play he is going to run. After being put to sleep, you wake up some time later dressed in the clothing of the lead role, Adam. You stumble on stage for the opening act, which involves you flipping a coin. Of course, you know the coin is rigged to land either heads or tails depending on which version of the play the director has chosen to run. Now you can ask yourself what the probability is that the coin will land heads, given that you have been assigned the role of Adam. In this case, hopefully you can convince yourself with a bit of thought that your being chosen as Adam does not give you any information about the director’s choice. So guessing that the coin will come up heads is equally justified as guessing that it will come up tails.

Let us now imagine a slight variation in the process. Suppose that, just before putting everyone to sleep, the director takes you aside and confides in you that he thinks you would make an excellent Adam. He likes you so much, in fact, that he has specially pre-assigned you the role of Adam in the case that he runs the two-person version of the play. However, he feels that in the many-character version of the play it would be too unfair not to give one of the other actors a chance at the lead, so in that case he intends to cast the role randomly as usual.

Given this extra information, you should now be much less surprised at waking up to find yourself in Adam’s costume. Indeed, your lack of surprise is due to the fact that your waking up in this role is a strong indication that the director went with his first choice – to run the two-person version of the play. You can therefore predict with confidence that your coin is rigged to land heads, and that the other actors are most probably safely on their way home with apologetic notes in their jacket pockets.
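Both versions of the audition reduce to the same Bayes-rule calculation; only the likelihood of waking up as Adam changes. The uniform prior over the director’s choice and the particular number of actors are my assumptions for illustration:

```python
from fractions import Fraction

N = 1000  # number of auditioning actors (any large number works)
prior_two_person = Fraction(1, 2)  # assumed: director equally likely to pick either play

def posterior_two_person(p_adam_given_v1, p_adam_given_v2):
    """P(two-person play | I woke up as Adam), by Bayes' rule."""
    num = prior_two_person * p_adam_given_v1
    return num / (num + (1 - prior_two_person) * p_adam_given_v2)

# First process: Adam is cast uniformly at random in both versions,
# so waking up as Adam carries no information.
print(posterior_two_person(Fraction(1, N), Fraction(1, N)))  # 1/2

# Second process: you are pre-assigned Adam in the two-person version only,
# so waking up as Adam strongly suggests the coin is rigged for heads.
print(posterior_two_person(Fraction(1), Fraction(1, N)))     # N/(N+1)
```

Same observation, same prior, two different casting processes: the posterior goes from 1/2 to nearly certainty, which is the whole moral of the paradox.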

What is the moral of this story? Be suspicious of any hypothetical scenario whose answer depends on mysterious unstated assumptions about how souls are assigned to bodies, whether the universe is deterministic, etc. Different choices of the process by which you find yourself in one situation or another will affect the extent to which your own existence informs your assignation of probabilities. Specifying these details means asking the question: what process determines the state of existence in which I find myself? If you want to reason about counterfactual scenarios in which you might have been someone else, or not existed at all, then you must first specify a clear model of how such states of existence come about. Without that information, you cannot reliably invoke your own existence as an aid to calculating probabilities.