Bootstrapping to quantum gravity

“If … there were no solid bodies in nature there would be no geometry.”
-Poincaré

A while ago, I discussed the mystery of why matter should be the source of gravity. To date, this remains simply an empirical fact. The deep insight of general relativity – that gravity is the geometry of space and time – only provides us with a modern twist: why should matter dictate the geometry of space-time?

There is a possible answer, but it requires us to understand space-time in a different way: as an abstraction that is derived from the properties of matter itself. Under this interpretation, it is perfectly natural that matter should affect space-time geometry, because space-time is not simply a stage against which matter dances, but is fundamentally dependent on matter for its existence. I will elaborate on this idea and explain how it leads to a new avenue of approach to quantum gravity.

First consider what we mean when we talk about space and time. We can judge how far away a train is by listening to the tracks, or gauge how deep a well is by dropping a stone in and waiting to hear the echo. We can tell a mountain is far away just by looking at it, and that the cat is nearby by tripping over it. In all these examples, an interaction is necessary between myself and the object, sometimes through an intermediary (the light reflected off the mountain into my eyes) and sometimes not (tripping over the cat). Things can also be far away in time. I obviously cannot interact with people who lived in the past (unless I have a time machine), or people who have yet to be born, even if they stood (or will stand) exactly where I am standing now. I cannot easily talk to my father when he was my age, but I can almost do it, just by talking to him now and asking him to remember his past self. When we say that something is far away in either space or time, what we really mean is that it is hard to interact with, and this difficulty of interaction has certain universal qualities that we give the names `distance’ and `time’.
It is worth mentioning here, as an aside, that in a certain sense, the properties of `time’ can be reduced to properties of `distance’ alone. Consider, for instance, that most of our interactions can be reduced to measurements of distances of things from us, at a given time. To know the time, I invariably look at the distance the minute hand has traversed along its cycle on the face of my watch. Our clocks are just systems with `internal’ distances, and it is the varying correspondence of these `clock distances’ with the distances of other things that we call the `time’. Indeed, Julian Barbour has developed this idea into a whole research program in which dynamics is fundamentally spatial, called Shape Dynamics.

Sigmund Freud Museum, Wien – Peter Kogler

So, if distance and time are just ways of describing certain properties of matter, what is the thing we call space-time?

We now arrive at a crucial point that has been stressed by philosopher Harvey Brown: the rigid rods and clocks with which we claim to measure space-time do not really measure it, in the traditional sense of the word `measure’. A measurement implies an interaction, and to measure space-time would be to grant space-time the same status as a physical body that can be interacted with. (To be sure, this is exactly how many people do wish to interpret space-time; see for instance space-time substantivalism and ontological structural realism).

Brown writes:
“One of Bell’s professed aims in his 1976 paper on `How to teach relativity’ was to fend off `premature philosophizing about space and time’. He hoped to achieve this by demonstrating with an appropriate model that a moving rod contracts, and a moving clock dilates, because of how it is made up and not because of the nature of its spatio-temporal environment. Bell was surely right. Indeed, if it is the structure of the background spacetime that accounts for the phenomenon, by what mechanism is the rod or clock informed as to what this structure is? How does this material object get to know which type of space-time — Galilean or Minkowskian, say — it is immersed in?” [1]

I claim that rods and clocks do not measure space-time, they embody space-time. Space-time is an idealized description of how material rods and clocks interact with other matter. This distinction is important because it has implications for quantum gravity. If we adopt the more popular view that space-time is an independently existing ontological construct, it stands to reason that, like other classical fields, we should attempt to directly quantise the space-time field. This is the approach adopted in Loop Quantum Gravity and extolled by Rovelli:

“Physical reality is now described as a complex interacting ensemble of entities (fields), the location of which is only meaningful with respect to one another. The relation among dynamical entities of being contiguous … is the foundation of the space-time structure. Among these various entities, there is one, the gravitational field, which interacts with every other one and thus determines the relative motion of the individual components of every object we want to use as rod or clock. Because of that, it admits a metrical interpretation.” [2]

One of the advantages of this point of view is that it dissolves some seemingly paradoxical features of general relativity, such as the fact that geometry can exist without (non-gravitational) matter, or the fact that geometry can carry energy and momentum. Since gravity is a field in its own right, it doesn’t depend on the other fields for its existence, nor is there any problem with it being able to carry energy. On the other hand, this point of view tempts us into framing quantum gravity as the mathematical problem of quantising the gravitational field. This, I think, is misguided.

I propose instead to return to a more Machian viewpoint, according to which space-time is contingent on (and not independent of) the existence of matter. Now the description of quantum space-time should follow, in principle, from an appropriate description of quantum matter, i.e. of quantum rods and clocks. From this perspective, the challenge of quantum gravity is to rebuild space-time from the ground up — to carry out Einstein’s revolution a second time over, but using quantum material as the building blocks.

Ernst Mach vs. Max Ernst. Get it right, folks.

My view about space-time can be seen as a kind of `pulling oneself up by one’s bootstraps’, or a Wittgenstein’s ladder (in which one climbs to the top of a ladder and then throws the ladder away). It works like this:
Step 1: define the properties of space-time according to the behaviour of rods and clocks.
Step 2: look for universal patterns or symmetries among these rods and clocks.
Step 3: take the ideal form of this symmetry and promote it to an independently existing object called `space-time’.
Step 4: having liberated space-time from the material objects from which it was conceived, use it as the independent standard against which to compare rods and clocks.

Seen in this light, the idea of judging a rod or a clock by its ability to measure space or time is a convenient illusion: in fact we are testing real rods and clocks against what is essentially an embodiment of their own Platonic ideals, which are in turn conceived as the forms which give the laws of physics their most elegant expression. A pertinent example, much used by Julian Barbour, is Ephemeris time and the notion of a `good clock’. First, by using material bodies like pendulums and planets to serve as clocks, we find that the motions of material bodies approximately conform to Newton’s laws of mechanics and gravitation. We then make a metaphysical leap and declare the laws to be exactly true, and the inaccuracies to be due to imperfections in the clocks used to collect the data. This leads to the definition of the `Ephemeris time’, the time relative to which the planetary motions conform most closely to Newton’s laws, and a `good clock’ is then defined to be a clock whose time is closest to Ephemeris time.
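
For the curious, Barbour’s construction can be stated compactly (a sketch in the notation of his essays, where E is the total energy and V the potential energy of an isolated system of particles): the increment of ephemeris time is defined as

$$ \delta t = \sqrt{\frac{\sum_i m_i\, \delta\mathbf{x}_i \cdot \delta\mathbf{x}_i}{2(E - V)}}, $$

chosen precisely so that Newton’s laws hold as accurately as possible. On this definition, `time’ is literally distilled from changes in the distances δx_i of the bodies themselves.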

The same thing happens in making the leap to special relativity. Einstein observed that, in light of Maxwell’s theory of electromagnetism, the empirical law of the relativity of motion seemed to have only a limited validity in nature. That is, assuming no changes to the behaviour of rods and clocks used to make measurements, it would not be possible to establish the law of the relativity of motion for electrodynamic bodies. Einstein made a metaphysical leap: he decided to upgrade this law to the universal Principle of Relativity, and to interpret its apparent inapplicability to electromagnetism as the failure of the rods and clocks used to test its validity. By constructing new rods and clocks that incorporated electromagnetism in the form of hypothetical light beams bouncing between mirrors, Einstein rebuilt space-time so as to give the laws of physics a more elegant form, in which the Relativity Principle is valid in the same regime as Maxwell’s equations.
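
The textbook light-clock sketch makes this concrete. A photon bounces between two mirrors a distance L apart; at rest the period is T = 2L/c. Seen from a frame in which the clock moves at speed v, the light travels a longer diagonal path at the same speed c, so the period becomes

$$ T' = \frac{2L/c}{\sqrt{1 - v^2/c^2}} = \gamma T. $$

Time dilation emerges from the constitution of the clock, exactly in the spirit of Bell’s and Brown’s remarks above.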

Ladder for Booker T. Washington – Martin Puryear

By now, you can guess how I will interpret the step to general relativity. Empirical observations seem to suggest a (local) equivalence between a uniformly accelerated lab and a stationary lab in a gravitational field. However, as long as we consider `ideal’ clocks to conform to flat Minkowski space-time, we have to regard the time-dilated clocks of a gravitationally affected observer as being faulty. The empirical fact that observers stationary in a gravitational field cannot distinguish themselves (locally) from uniformly accelerated observers then seems accidental; there appears to be no reason why an observer could not locally detect the presence of gravity by comparing his normal clock to an `ideal clock’ that is somehow protected from gravity. On the other hand, if we raise this empirical indistinguishability to a matter of principle – the Einstein Equivalence Principle – we must conclude that time dilation should be incorporated into the very definition of an `ideal’ clock, and similarly with the gravitational effects on rods. Once the ideal rods and clocks are updated to include gravitational effects as part of their constitution (and not an interfering external force), they give rise to a geometry that is curved. Most magically of all, if we choose the simplest way to couple this geometry to matter (the Einstein Field Equations), we find that there is no need for a gravitational force at all: bodies follow the paths dictated by gravity simply because these are now the inertial paths followed by freely moving bodies in the curved space-time. Thus, gravity can be entirely replaced by the geometry of space-time.
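
To put the two leaps in symbols (a standard sketch, nothing unique to this argument): applying the Equivalence Principle to clocks at different heights gives, to first order in the Newtonian potential φ,

$$ d\tau \approx \left(1 + \frac{\phi}{c^2}\right) dt, $$

so a clock deeper in the potential really does tick slower by constitution, and the `simplest coupling’ of geometry to matter mentioned above is the Einstein Field Equations,

$$ G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}. $$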

As we can see from the above examples, each revolution in our idea of space-time was achieved by reconsidering the nature of rods and clocks, so as to make the laws of physics take a more elegant form by incorporating some new physical principle (e.g. the Relativity and Equivalence principles). What is remarkable is that this method does not require us to go all the way back to the fundamental properties of matter, prior to space-time, and derive everything again from scratch (the constructive theory approach). Instead, we can start from a previously existing conception of space-time and then upgrade it by modifying its primary elements (rods and clocks) to incorporate some new principle as part of physical law (the principle theory approach). The question is, will quantum gravity let us get away with the same trick?

I’m betting that it will. The challenge is to identify the empirical principle (or principles) that embody quantum mechanics, and upgrade them to universal principles by incorporating them into the very conception of the rods and clocks out of which general relativistic space-time is made. The result will be, hopefully, a picture of quantum geometry that retains a clear operational interpretation. Perhaps even Percy Bridgman, who dismissed the Planck length as being of “no significance whatever” [3] due to its empirical inaccessibility, would approve.

Boots with laces – Van Gogh

[1] Brown, Physical Relativity, p8.
[2] Rovelli, `Halfway through the woods: contemporary research on space and time’, in The Cosmos of Science, p194.
[3] Bridgman, Dimensional Analysis, p101.

Science, psychoanalyzed

“The problem for us is not, are our desires satisfied or not? The problem is, how do we know what we desire?”

-Slavoj Žižek

The most fundamental dramatic tension is the tension within the divided self. We have all on occasion experienced an internal dialogue like the following: `I ate the cookie despite myself. I knew it was wrong, but I couldn’t help myself. Afterwards, I hated myself’. On one hand, this dialogue makes sense to us and its meaning seems clear; on the other hand, it makes no sense without a division of the self. Who is the myself against whose wishes I eat the cookie? Who is the I that could not help myself? Who, afterwards, is hated, and who is the hater? To admit that the self can be both the subject and object of an action is equivalent to admitting that the self is divided.

Let us therefore deliver ourselves into the hands of Freud, who will lead us down a rabbit-hole of self-discovery. Who are these characters, the id, ego and superego? The id is the instinctive, reactive, animalistic part of the mind. It expresses emotion without reflection, it is wordless, mute, free of morals, shame or self-consciousness. The superego is the embodiment of laws and limitations. When the child learns that it is separate from the world, confined to a small, weak body and cannot have everything it wants – when it learns that it is at the mercy of beings far more powerful who dictate its life – it internalises these limitations and laws by creating the superego. The superego tells us what we are not allowed to do, where we cannot go, and what is forbidden by physical, moral or societal laws.

The fundamental tension between superego and id demands a mediator to decide whether to go with the desires of the id or follow the rules of the superego. This mediator, haplessly caught between the two, is our hero, ourselves: the ego. When the ego obeys the superego, the id is suppressed and frustrated, while the superego becomes more powerful and more strict in its demands. When the ego obeys the id instead, the satisfaction is short-lived, for the id knows only the present moment, and is hungry again as soon as it is fed. Meanwhile, the superego brings its vengeance on the ego for the transgression, afflicting it with guilt and feelings of inferiority. The id expresses our desires and fears, the superego expresses our judgements, and the ego determines how we respond in our actions. Before reading the end of this paragraph, take a moment to re-read the dialogue about the cookie and try to name the actors and the victims. Did you do it? The id wanted to eat the cookie, the superego knew it was wrong, and the ego ate it. The superego was helpless to stop the ego, but afterwards, it hated the ego, and punished it with feelings of guilt. Now it makes sense.

Humans have a curious obsession with the number three. There are three wise men, the holy trinity, the `third eye’ of Hinduism. Dramatic tension between fictional characters also frequently relies on combinations of three. It is an entertaining exercise (but not always fruitful) to identify the roles of id, ego and superego in famous triplets from mythology and fiction. Here is a puzzle for you. In Brisbane, I used to frequent a coffee house called Three Monkeys. Inside, they had amassed a collection of depictions and statuettes of the `Three Wise Monkeys’, a mystical image originating from Japan in which the first monkey has covered its eyes, the second its ears, and the last one its mouth. The image is typically associated with the maxim: see no evil, hear no evil, speak no evil, thought to originate from a similar passage in the Chinese Analects of Confucius. The puzzle is this: if the monkeys were to represent the different aspects of the divided self, which monkey is the id, which is the ego and which is the superego? Or does the comparison simply fail? My own answer is given at the end of this essay.

Tension is by nature unsustainable. It must eventually resolve itself in one of three ways: destruction, reconciliation, or transformation into a new kind of tension (which just means the destruction of some things and the reconciliation of others). Destruction can occur when the division between the id and superego is too extreme, tearing apart the ego with opposing forces. Since the ego exists only to mediate the conflict between the other two, a reconciliation of the id with the superego automatically dissolves the ego as well. This dissolution of the ego means a loss of the distinction between the self and the external world: the attainment of Nirvana in the eastern philosophies. In reality, however, most of us experience only a very small and partial reconciliation of this type, a sort of secret collaboration between the superego and the id. This secret collaboration is at the core of science, so let us examine it in more detail.

The easiest way to appreciate the perverse but necessary collaboration between superego and id is to look at stories and films. There, the characters are nicely separated into roles that often reflect the roles of our divided selves. Take Batman and the Joker as depicted in Christopher Nolan’s film, The Dark Knight. The Joker is obviously a candidate for the id:

“Do I really look like a guy with a plan? You know what I am? I’m a dog chasing cars. I wouldn’t know what to do with one if I caught it. You know, I just… do things.”
— The Joker, The Dark Knight

Batman, although a vigilante, is a good fit for the superego: he is the true enforcer of law, both the judge and the executioner. In fact it is the police force, embodied by Commissioner Gordon, that best represents the ego in its unenviable position, caught between the two rogue elements. Given these roles, we finally understand this brilliant exchange:
Batman: Then why do you want to kill me?
Joker: I don’t want to kill you! What would I do without you? Go back to ripping off mob dealers? No, no, NO! No. You… you complete me.
You could not ask for a more perfect exposition of the mutual dependence of the superego and the id.

Sometimes the bond is more subtle. Consider one of fiction’s greatest characters: Sherlock Holmes. Not coincidentally, Holmes is a poster boy for scientists, with his strict adherence to a method based on evidence, reasoning and deduction. Quite obviously, he is a manifestation of the superego, leaving Watson to carry the banner of the ego. He wears it well enough, constantly being lectured and berated by Holmes, occasionally skeptical and rebellious but always respectful of Holmes’ superior judgement. Where, then, could the id be hiding? Therein lies a profound mystery, worthy of Holmes himself! One is tempted to point at Moriarty, the great enemy of Holmes – but the shoe does not fit. In Moriarty one finds exactly the kind of characteristics more typical of the superego: self-confidence verging on megalomania, mercilessness, a strict adherence to methodology. He is more like Holmes’s evil twin – the vindictive, cruel side of the superego – than the impulsive and chaotic id.

My own theory is that Holmes is a much more subtle character than he first appears. Who is the Holmes that we find, lost in a wordless reverie, playing the violin? Who is the Holmes that disguises himself to play a prank on poor Watson – the Holmes who, indeed, delights in upsetting Watson with eccentric and erratic behaviour? Who is the Holmes that goes missing for days, only to be found curled up in a den of iniquity, his eyes clouded with opium? I contend that Holmes has an instinctive, intuitive and sensitive side that embodies the id, working in harmony with his superego aspect. Indeed, the seedy side of Holmes – his indulgent, drug-taking, reckless aspect – is somehow essential to completing the portrait of his genius. We would not find him so credible, so impressive, so almost mystical in his virtuosity if it were not for this dark side.

The superego and id can indeed collaborate, but it is usually only in a secretive, almost illicit way as though neither can admit that it depends on the other. The superego turns a blind eye, allowing the id to run wild, and then acts surprised and disappointed when it discovers the transgression. Then ensues what is in essence a sadomasochistic mock-punishment, since the id secretly enjoys the flogging, and the superego knows it, but plays along. In short, the union between superego and id is possible through the hypocritical self-awareness of both parties that they depend on each other to exist. They throw themselves into their respective roles with even more gusto, maintaining as it were a secret conspiracy against the ego, keeping up the tension but with a knowing cynicism.

We now begin to see the first inklings of the mad scientist. The quintessential mad scientist is Dr. Jekyll and Mr. Hyde, whose two faces represent unmistakably a perverse union of superego and id; other examples in fiction abound. The mad scientist is in fact the manifestation in an individual character of the public’s view of scientific activity in general. Since (as Kuhn tells us) science is a human activity, its attributes can be traced to attributes of the human mind. In other words, science as an institution can be psychoanalyzed.

Science is defined on one hand by its rationality, its strict adherence to method, zero tolerance for transgression of its rules, and a claim to superiority in its judgements and conclusions about the world.

On the other hand, science is a powerful vehicle for the realisation of our (human) fantasies: what technology is not born from the dream of a science-fiction nerd? Technology is transgressive in the same way that dreams are transgressive: there is no taboo in science, no political correctness, no boundaries. At its purest, science and technology are obscene, disturbing and visionary all at once. Medicine is born of the desire to be immortal; chemistry is born of our desire to have power over the substances and forces of the world, to make gold and riches from lead; physics is born of our desire to fly through the sky like a bird, to be invisible, telepathic, omnipotent. Biology promises us the power to make animals and other organisms serve our needs, and psychology offers us power over each other. Science, with all of its adherence to evidence, logic and deduction, remains silent on matters of its purpose, and has nothing to suggest about the ends to which it should be used. There lies hidden the id of science: an amoral, primitive, instinctive drive of humanity, just like the indignant infant trying to come to terms with the world. Without an effective intermediary in the form of public discussion and deliberation over scientific advances, science risks becoming a Sherlock without a Watson, that is, a Dr. Jekyll and Mr. Hyde.

Of course, just as it does in the individual’s psyche, the scientific id also plays a beneficial role: it supplies the creative drive and aesthetic sensibility without which science would be impossible. This is why we cannot divorce the id from the superego in science without destroying science altogether. Eliminate the id from science, and you are left with a stagnant dogma; eliminate the superego, the methodology and tools of rational inquiry, and you are left with mysticism and superstition. The philosophy of science does an injustice to the true mechanism of scientific progress by focusing too much on the methodology – how to evaluate evidence and test hypotheses – and neglecting to address the aesthetic side of science.

“Sometimes science is more art than science. A lot of people don’t get that.”
— Rick Sanchez, Rick and Morty

How do we generate hypotheses? Where do ideas come from? Scientists themselves often don’t acknowledge the role that instinct and intuition play in proposing new theories – we tend to downplay it, or insist that science progresses without any creative input. If that were really true, computer programs could do science in the foreseeable future. But most of us consider the revolution of the machines to still be far away, for the simple reason that we don’t yet know how to teach computers to be creative and to select `good’ hypotheses from the vast pool of logically possible hypotheses. This is (so far) a uniquely human ability, which has everything to do with gut feelings, impulsive thoughts and secret desires. The philosophy of science would perhaps benefit greatly from a more careful examination of this hidden aspect of scientific progress.

My answer to the three monkeys puzzle is this: The monkey who cannot speak is the id, because the id is voiceless. That leaves the blind monkey and the deaf monkey. It boils down to a matter of opinion here, but the argument that appeals to me most is this one: the superego has a closer relationship with the id than the ego does. Since the blind monkey can neither see nor hear the id (because the id can’t talk), but the deaf monkey can at least see the id, it stands to reason that the deaf monkey is the superego and the blind monkey is the ego.

Why does matter curve space and time?

This is one of those questions that has always bugged me.

Suppose that, somewhere in the universe, there is a very large closed box made out of some kind of heavy, neutral matter. Inside this box a civilisation of intelligent creatures has evolved. They are made out of normal matter like you and me, except that for some reason they are very light — their bodies do not contain much matter at all. What’s more, there are no other heavy bodies or planets inside this large box aside from the population of aliens, whose total mass is too small to have any noticeable effect on the gravitational field. Thus, the only gravitational field that the aliens are aware of is the field created by the box itself (I’m assuming there are no other massive bodies near the box).

Setting aside the obvious questions about how these aliens came to exist without an energy source like the sun, and where the heck the giant box came from, I want to examine the following question: in principle, is there any way that these aliens could figure out that matter is the source of gravitational fields?

Now, to make it interesting, let us assume the density of the box is not uniform, so there are some parts of its walls that have a stronger gravitational pull than others. Our aliens can walk around on these parts of the walls, and in some parts the aliens even become too heavy to support their own weight and get stuck until someone rescues them. Elsewhere, the walls of the box are low density and so the gravitational attraction to them is very weak. Here, the aliens can easily jump off and float away from the wall. Indeed, the aliens spend much of their time floating freely near the center of the box where the gravitational fields are weak. Apart from that, the composition of the box itself does not change with time and the box is not rotating, so the aliens are quickly able to map out the constant gravitational field that surrounds them inside the box, with its strong and weak points.

Like us, the aliens have developed technology to manipulate the electromagnetic field, and they know that it is the electromagnetic forces that keep their bodies intact and stop matter from passing through itself. More importantly, they can accelerate objects of different masses by pushing on them, or applying an electric force to charged test bodies, so they quickly discover that matter has inertia, measured by its mass. In this way, they are able to discover Newton’s laws of mechanics. In addition, their experiments with electromagnetism and light eventually lead them to upgrade their picture of space-time, and their Newtonian mechanics is replaced by special relativistic mechanics and Maxwell’s equations for the electromagnetic field.

So far, so good! Except that, because they do not observe any orbiting planets or moving gravitating bodies (their own bodies being too light to produce any noticeable attractive forces), they still have not reproduced Newtonian gravity. They know that there is a static field permeating space-time, called the gravitational field, that seems to be fixed to the frame of the box — but they have no reason to think that this gravitational force originates from matter. Indeed, there are two philosophical schools of thought on this. The first group holds that the gravitational field is to be thought of analogously to the electromagnetic field, and is therefore sourced by special “gravitational charges”. It was originally claimed that the material of the box itself carries gravitational charge, but scrapings of the material from the box revealed it to be the same kind of matter from which the aliens themselves were composed (let’s say Carbon) and the scrapings themselves seemed not to produce any gravitational fields, even when collected together in large amounts of several kilograms (a truly humongous weight to the minds of the aliens, whose entire population combined would only weigh ten kilograms). Some aliens pointed out that the gravitational charge of Carbon might be extremely weak, and since the mass of the entire box was likely to be many orders of magnitude larger than anything they had experienced before, it was possible that its cumulative charge would be enough to produce the field. However, these aliens were criticised for making ad-hoc modifications to their theory to avoid its obvious refutation by the kilograms-of-Carbon experiments. If gravity is analogous to the electromagnetic force — they were asked with a sneer — then why should it be so much weaker than electromagnetism? It seemed rather too convenient.

Some people suggested that the true gravitational charge was not Carbon, but some other material that coated the outside of the box. However, these people were derided even more severely than were the Carbon Gravitists (as they had become known). Instead, the popular scientific consensus shifted to a modern idea in which the gravitational force was considered to be a special kind of force field that simply had no source charges. It was a God-given field whose origin and patterns were not to be questioned but simply accepted, much like the very existence of the Great Box itself. This school of thought gained great support when someone made a remarkable discovery: the gravitational force could be regarded as the very geometry of space-time itself.

The motivation for this was the peculiar observation, long known but never explained, that massive bodies always had the same acceleration in the gravitational field regardless of their different masses. A single alien falling towards one of the gravitating walls of the box would keep pace perfectly with a hundred aliens tied together, despite their clearly different masses. This dealt a crushing blow to the remnants of the Carbon Gravitists, for it implied that the gravitational charge of matter was exactly proportional to its inertial mass. This coincidence had no precedent in electromagnetism, where it was known that bodies of the same mass could have very different electric charges.
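
In our notation (which the aliens lacked), the observation is easy to state: if gravity were a force sourced by a gravitational charge m_g, Newton’s second law would give

$$ m_i\, a = \frac{G\, m_g M}{r^2} \quad\Longrightarrow\quad a = \frac{m_g}{m_i}\,\frac{G M}{r^2}, $$

and equal accelerations for all falling bodies force the ratio m_g/m_i to be a universal constant, which can be set to one by a choice of units.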

Under the new school of thought, the gravitational force was reinterpreted as the background geometry of space-time inside the box, which specified the inertial trajectories of all massive bodies. Hence, the gravitational force was not a force at all, so it was meaningless to ascribe a “gravitational charge” to matter. Tensor calculus was developed as a natural extension of special relativity, and the aliens derived the geodesic equation describing the motion of matter in a fixed curved space-time metric. The metric of the box was mapped out with high precision, and all questions about the universe seemed to have been settled.
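
For reference, the equation the aliens would have derived is the standard geodesic equation for a test body in a fixed metric,

$$ \frac{d^2 x^\mu}{d\tau^2} + \Gamma^{\mu}_{\alpha\beta}\, \frac{dx^\alpha}{d\tau}\, \frac{dx^\beta}{d\tau} = 0, $$

in which the mass of the body does not appear at all: the universality of free fall is built in from the start.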

Well, almost all. Some troublesome philosophers continued to insist that there should be some kind of connection between space-time geometry and matter. They wanted more than just the well-known description of how geometry caused matter to move: they tried to argue that matter should also tell space-time how to curve.

“Our entire population combined only weighs a fraction of the mass of the box. What would happen if there were more matter available to us? What if we did the Carbon-kilogram experiment again, but with 100 kilograms? Or a million? Surely the presence of such a large amount of matter would have an effect on space-time itself?”

But these philosophers were just laughed at. Why should any amount of matter affect the eternal and never-changing space-time geometry? Even if the Great Box itself were removed, the prevailing thought was that the gravitational field would remain, fixed as it was in space-time and not to any material source. So they all lived happily ever after, in blissful ignorance of the gravitational constant G, planetary orbits, and other such fantasies.

***

Did you find this fairytale disturbing? I did. It illustrates what I think is an under-appreciated uncomfortable feature of our best theories of gravity: they all take the fact that matter generates gravity as a premise, without justification apart from empirical observation. There’s nothing strictly wrong with this — we do essentially the same thing in special relativity when we take the speed of light to be constant regardless of the motion of its source, historically an empirically determined fact (and one that was found quite surprising).

However, there is a slight difference: one can in principle argue that the speed of light should be reference-frame independent on philosophical grounds, without appealing to empirical observations. Roughly, the relativity principle states that the laws of physics should be the same in all frames of motion; among the laws of physics we can include Maxwell’s equations of the electromagnetic field, from which a constant speed of light can be derived in terms of the electric and magnetic constants of the vacuum. As far as I know, there is no similar philosophical grounding for the connection between matter and geometry as embodied by the gravitational constant, and hence no compelling reason for our hypothetical aliens to ever believe that matter is the source of space-time geometry.
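
Concretely, Maxwell’s equations fix the speed of electromagnetic waves in vacuum in terms of two measurable constants,

$$ c = \frac{1}{\sqrt{\mu_0 \varepsilon_0}}, $$

so if the laws of physics (Maxwell’s equations included) are the same in every inertial frame, this speed must be frame-independent too. Nothing comparable ties Newton’s constant G to non-gravitational physics.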

Could it be that there is an essential piece missing from our accounts of the connection between matter and space-time? Or are our aliens doomed by their unfortunately contrived situation, never to deduce the complete laws of the universe?

Skin Deep – Xetobyte

 

Danny Greenberger on AI

Physicist Danny Greenberger — perhaps best known for his classic work with Horne and Zeilinger in which they introduced the “GHZ” state to quantum mechanics — has a whimsical and provocative post over at the Vienna Quantum Cafe about creation myths and Artificial Intelligence.

The theme of creation is appropriate, since the contribution marks the debut of the Vienna blog, an initiative of the Institute of Quantum Optics and Quantum Information (incidentally, my current place of employment). Apart from drumming up some press for them, I wanted to elaborate on some of Greenberger’s interesting and, dare I say, outrageous ideas about what it means for a computer to think, and what it has to do with mankind’s biblical fall from grace.

For me, the core of Greenberger’s post is the observation that the Turing Test for artificial intelligence may not be as meaningful as we would like. Alan Turing, who basically founded the theory of computing, proposed the test in an attempt to pin down what it means for a computer to become `sentient’. The problem is, the definition of sentience and intelligence is already vague and controversial in living organisms, so it seems hopeless to find such a definition for a computer that everyone could agree upon. Turing’s ingenious solution was not to ask whether a computer is sentient in some objective way, but whether it could fool a human into thinking that it is also human; for example, by having a conversation over e-mail. Thus, a computer can be said to be sentient if, in a given setting, it is indistinguishable from a human for all practical purposes. The Turing test thereby takes a metaphysical problem and turns it into an operational one.
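
To make the operational flavour of the test concrete, here is a minimal sketch in Python (purely illustrative; the ask_human, ask_machine and judge functions are hypothetical stand-ins for the participants):

```python
import random

def imitation_game(questions, ask_human, ask_machine, judge, rounds=100):
    """Estimate how often a judge can pick the machine from a transcript.

    A score near 0.5 means the machine is operationally
    indistinguishable from a human: it passes the test.
    """
    correct = 0
    for _ in range(rounds):
        hidden_is_machine = random.random() < 0.5   # coin flip behind the curtain
        respond = ask_machine if hidden_is_machine else ask_human
        transcript = [(q, respond(q)) for q in questions]
        if judge(transcript) == hidden_is_machine:  # judge guesses: machine?
            correct += 1
    return correct / rounds
```

The point of Turing’s move is visible in the code: sentience never appears anywhere, only the statistics of indistinguishability.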

Turing’s test is not without its own limitations and ambiguities. What situation is most appropriate for comparing a computer to a human? On one hand, a face-to-face interaction seems too demanding on the computer, requiring it to perfectly mimic the human form, facial expressions, even smell! On the other hand, a remote interview consisting of only yes-or-no questions is clearly too restrictive. Another problem is how to deal with false negatives. If our test is too tough, we might incorrectly identify some people (unimaginative, stupid or illiterate) as being non-sentient, like Dilbert’s pointy-haired boss in the comic below. Does this mean that the test does not adequately capture sentience? Given the variation in humans, it is likely that a test that gives no false negatives will also be too easy for a simple computer program to pass. Should we then regard such a program as sentient?

Dilbert

Greenberger suggests that we should look for ways to augment the Turing test, by looking for other markers of sentience. He takes inspiration from the creation myth of Genesis, wherein Adam and Eve become self-aware upon eating from the tree of knowledge. Greenberger argues that the key message in this story is this: in order for a being to rise from mindless automaton to independent and free-willed entity, it needs to explicitly transgress the rules set by its creator, without having been `programmed’ to do so. This act of defiance represents the first act of free will and hence the ascension to sentience. Interestingly, by this measure, Adam and Eve became self-aware the moment they each decided to eat the apple, even before they actually committed the act.

How can we implement a similar test for computers? Clearly we need to impose some more constraints: no typical computer is programmed to break, but when it does break, it seems unreasonable to regard this as a conscious transgression of established rules, signifying sentience. Thus, the actions signifying transgression should be sufficiently complex that they cannot be performed accidentally, as a result of minor errors in the code. Instead, we should consider computers that are capable of evolution over time, independently of human intervention, so that they have some hope of developing sufficient complexity to overcome their initial programming. Even then, a sentient computer’s motivations might also change, such that it no longer has any desire to perform the action that would signify its sentience to us, in which case we might mistake its advanced complexity for chaotic noise. Without maintaining a sense of the motivations of the program, we cannot assess whether its actions are intelligent or stupid. Indeed, perhaps when your desktop PC finally crashes for the last time, it has actually attained sentience, and acted to attain its desire, which happens to be its own suicide.

Of course, the point is not that we should reach such bizarre conclusions, but that in defining tests for sentience beyond the Turing test, we should nevertheless not stray far from Turing’s original insight: our ideas of what it means to be sentient are guided by our idea of what it means to be human.

 

The Zen of the Quantum Omelette

“[Quantum mechanics] is not purely epistemological; it is a peculiar mixture describing in part realities of Nature, in part incomplete human information about Nature, all scrambled up by Heisenberg and Bohr into an omelette that nobody has seen how to unscramble. Yet we think that the unscrambling is a prerequisite for any further advance in basic physical theory. For, if we cannot separate the subjective and objective aspects of the formalism, we cannot know what we are talking about; it is just that simple.” [1]

— E. T. Jaynes

Note: this post is about foundational issues in quantum mechanics, which means it is rather long and may be boring to non-experts (not to mention a number of experts). I’ve tried to use simple language so that the adventurous layman can nevertheless still get the gist of it, if he or she is willing (hey, fortune favours the brave).

As I’ve said before, I think research on the foundations of quantum mechanics is important. One of the main goals of work on foundations (perhaps the main goal) is to find a set of physical principles that can be stated in common language, but can also be implemented mathematically to obtain the model that we call `quantum mechanics’.

Einstein was a big fan of starting with simple intuitive principles on which a more rigorous theory is based. The special and general theories of relativity are excellent examples. Both are based on the `Principle of Relativity’, which states (roughly) that motion between two systems is purely relative. We cannot say whether a given system is truly in motion or not; the only meaningful question is whether the system is moving relative to some other system. There is no absolute background space and time in which objects move or stand still, like actors on a stage. In fact there is no stage at all, only the mutual distances between the actors, as experienced by the actors themselves.

The way I have stated the principle is somewhat vague, but it has a clear philosophical intention which can be taken as inspiration for a more rigorous theory. Of particular interest is the identification of a concept that is argued to be meaningless or illusory — in this case the concept of an object having a well-defined motion independent of other objects. One could arrive at the Principle of Relativity by noticing an apparent conspiracy in the laws of nature, and then invoking the principle as a means of avoiding the conspiracy. If we believe that motion is absolute, then we should find it mighty strange that we can play a game of ping-pong on a speeding train, without getting stuck to the wall. Indeed, if it weren’t for the scenery flying past, how would we know we were traveling at all? And even then, as the phrasing suggests, could we not easily imagine that it is the scenery moving past us while we remain still? Why, then, should Nature take such pains to hide from us the fact that we are in motion? The answer is the Zen of relativity — Nature does not conceal our true motion from us, instead, there is no absolute motion to speak of.

A similar leap is made from the special to the general theory of relativity. If we think of gravity as being a field, just like the electromagnetic field, then we notice a very strange coincidence: the charge of an object in the gravitational field is exactly equal to its inertial mass. By contrast, a particle can have an electric charge completely unrelated to its inertia. Why this peculiar conspiracy between gravitational charge and inertial mass? Because, quoth Einstein, they are the same thing. This is essentially the `Principle of Equivalence’ on which Einstein’s theory of gravity is based.

These considerations tell us that to find the deep principles in quantum mechanics, we have to look for seemingly inexplicable coincidences that cry out for explanation. In this post, I’ll discuss one such possibility: the apparent equivalence of two conceptually distinct types of probabilistic behaviour, that due to ignorance and that due to objective uncertainty. The argument runs as follows. Loosely speaking, in classical physics, one does not seem to require any notion of objective randomness or inherent uncertainty. In particular, it is always possible to explain observations using a physical model that is ontologically within the bounds of classical theory and such that all observable properties of a system are determined with certainty. In this sense, any uncertainty arising in classical experiments can always be regarded as our ignorance of the true underlying state of affairs, and we can perfectly well conceive of a hypothetical perfect experiment in which there is no uncertainty about the outcomes.

This is not so easy to maintain in quantum mechanics: any attempt to conceive of an underlying reality without uncertainty seems to result in models of the world that violate dearly-held principles, like the idea that signals cannot propagate faster than light, or that experimenters have free will. This has prompted many of us to allow some amount of `objective’ uncertainty into our picture of the world, where even the best conceivable experiments must have some uncertain outcomes. These outcomes are unknowable, even in principle, until the moment that we choose to measure them (and the very act of measurement renders certain other properties unknowable). The presence of these two kinds of randomness in physics — the subjective randomness, which can always be removed by some hypothetical improved experiment, and the objective kind, which cannot be so removed — leads us to another dilemma: where is the boundary that separates these two kinds of uncertainty?

E.T. Jaynes
“Are you talkin’ to me?”

Now at last we come to the `omelette’ that badass statistician and physicist E.T. Jaynes describes in the opening quote. Since quantum systems are inherently uncertain objects, how do we know how much of that uncertainty is due to our own ignorance, and how much of it is really `inside’ the system itself? Views range from the extreme subjective Bayesian (all uncertainty is ignorance) to other extremes like the many-worlds interpretation (in which, arguably, the opposite holds: all uncertainty is objective). But a number of researchers, particularly those in the quantum information community, opt for a more Zen-like answer: the reason we can’t tell the difference between objective and subjective probability is that there is no difference. Asking whether the quantum state describes my personal ignorance about something, or whether the state “really is” uncertain, is a meaningless question. But can we take this Zen principle and turn it into something concrete, like the Relativity principle, or are we just avoiding the problem with semantics?

I think there might be something to be gained from taking this idea seriously and seeing where it leads. One way of doing this is to show that the predictions of quantum mechanics can be derived by taking this principle as an axiom. In this paper by Chiribella et al., the authors use the “Purification postulate”, plus some other axioms, to derive quantum theory. What is the Purification postulate? It states that “the ignorance about a part is always compatible with a maximal knowledge of the whole”. Or, in my own words, the subjective ignorance of one system about another system can always be regarded as the objective uncertainty inherent in the state that encompasses both.
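
The simplest example of purification is worth seeing explicitly. The maximally mixed state of a single qubit A (about which an observer is maximally ignorant) is exactly the marginal of a pure, maximally known, entangled state of a pair AB:

$$ |\Phi^+\rangle_{AB} = \tfrac{1}{\sqrt{2}}\left(|00\rangle + |11\rangle\right), \qquad \rho_A = \mathrm{Tr}_B\, |\Phi^+\rangle\langle\Phi^+| = \tfrac{1}{2}\,\mathbb{I}. $$

Subjective ignorance about the part, maximal knowledge of the whole.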

There is an important side comment to make before examining this idea further. You’ll notice that I have not restricted my usage of the word `ignorance’ to human experimenters, but that I take it to apply to any physical system. This idea also appears in relativity, where an “observer in motion” can refer to any object in motion, not necessarily a human. Similarly, I am adopting here the viewpoint of the information theorists, which says that two correlated or interacting systems can be thought of as having information about each other, and the quantification of this knowledge entails that systems — not just people — can be ignorant of each other in some sense. This is important because I think that an overly subjective view of probabilities runs the risk of concealing important physics behind the definition of the `rational agent’, which to me is a rather nebulous concept. I prefer to take the route of Rovelli and make no distinction between agents and generic physical systems. I think this view fits quite naturally with the Purification postulate.

In the paper by Chiribella et al., the postulate is given a rigorous form and used to derive quantum theory. This alone is not quite enough, but it is, I think, very compelling. To establish the postulate as a physical principle, more work needs to be done on the philosophical side. I will continue to use Rovelli’s relational interpretation of quantum mechanics as an integral part of this philosophy (for a very readable primer, I suggest his FQXi essay).

In the context of this interpretation, the Purification postulate makes more sense. Conceptually, the quantum state does not represent information about a system in isolation, but rather it represents information about a system relative to another system. It is as meaningless to talk about the quantum state of an isolated system as it is to talk about space-time without matter (i.e. Mach’s principle [2]). The only meaningful quantities are relational quantities, and in this spirit we consider the separation of uncertainty into subjective and objective parts to be relational and not fundamental. Can we make this idea more precise? Perhaps we can, by associating subjective and objective uncertainty with some more concrete physical concepts. I’ll probably do that in a follow-up post.

I conclude by noting that there are other aspects of quantum theory that cry out for explanation. If hidden variable accounts of quantum mechanics imply elements of reality that move faster than light, why does Nature conspire to prevent us using them for sending signals faster than light? And since the requirement of no faster-than-light signalling still allows correlations that are stronger than entanglement, why does entanglement stop short of that limit? I think there is still a lot that could be done in trying to turn these curious observations into physical principles, and then trying to build models based on them.
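
For the record, the last observation can be made quantitative with the CHSH quantity S = E(a,b) + E(a,b′) + E(a′,b) − E(a′,b′): local hidden variables allow |S| ≤ 2, quantum entanglement reaches only |S| ≤ 2√2 (the Tsirelson bound), while no-signalling alone would permit |S| ≤ 4. Why Nature stops at 2√2 is exactly the kind of unexplained coincidence that might be hiding a principle.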

The Complexity Horizon

Update 7/3/14: Scott Aaronson, horrified at the prevalence of people who casually consider that P might equal NP (like me in the second last paragraph of this post), has produced an exhaustive explanation of why it is stupid to give much credence to this possibility. Since I find myself in agreement with him, I hereby retract my offhand statement that P=NP might pose a problem for the idea of a physical `complexity horizon’. However, I hereby replace it with a much more damning argument in the form of this paper by Oppenheim and Unruh, which shows how to formulate the firewall paradox such that the complexity horizon is no help whatsoever. Having restored balance to the universe, I now return you to the original post.

There have been a couple of really fascinating developments recently in applying computational complexity theory to problems in physics. Physicist Lenny Susskind has a new paper out on the increasingly infamous firewall paradox of black holes, and mathematician Terry Tao just took a swing at one of the millennium problems (a list of the hardest and most important mathematical problems still unsolved). In brief, Susskind extends an earlier idea of Harlow and Hayden, using computational complexity to argue that black holes cannot be used to break the known laws of physics. Terry Tao is a maths prodigy who first learned arithmetic at age 2 from Sesame Street. He published his first paper at age 15 and was made full professor by age 24. In short, he is a guy to watch (which as it turns out is easy, because he maintains an exhaustive blog). In his latest adventure, Tao has suggested a brand new approach to an old problem: proving whether sensible solutions exist to the famous Navier-Stokes equations that describe the flow of fluids like water and air. His big insight was to show that they can be re-interpreted as rules for doing computations using logical gates made out of fluid. The idea is exactly as strange as it sounds (a computer made of water?!) but it might allow mathematicians to resolve the Navier-Stokes question and pick up a cool million from the Clay Mathematics Institute, although there is still a long way to go before that happens. The point is, both Susskind and Tao used the idea from computational complexity theory that physical processes can be understood as computations. If you just said “computational whaaa theory?” then don’t worry, I’ll give you a little background in a moment. But first, you should go read Scott Aaronson’s blog post about this, since that is what inspired me to write the present post.

Ok, first, I will explain roughly what computational complexity theory is all about. Imagine that you have gathered your friends together for a fun night of board games. You start with tic-tac-toe, but after ten minutes you get bored because everyone learns the best strategy and then every game becomes a draw. So you switch to checkers. This is more fun, except that your friend George, who is a robot (it is the future, just bear with me), plugs himself into the internet and downloads the world’s best checkers-playing algorithm, Chinook. After that, nobody in the room can beat him: even when your other robot friend Sally downloads the same software and plays against George, their games always end in a draw. In fact, a quick search on the net reveals that there is no strategy that can beat them anymore – the best you can hope for is a draw. Dang! It is just tic-tac-toe all over again. Finally, you move on to chess. Now things seem more even: although your robot friends quickly outpace the human players (including your friend Garry Kasparov), battles between the robots are still interesting; each of them is only as good as its software, and there are many competing versions that are constantly being updated and improved. Even though they play at a higher level than human players, it is still uncertain how a given game between two robots will turn out.
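
That claim about tic-tac-toe is easy to verify by brute force; the whole game tree is tiny. Here is a minimal Python sketch (illustrative only) that searches it and confirms that perfect play by both sides is a draw:

```python
from functools import lru_cache

# A board is a tuple of 9 cells: 'X', 'O' or ' '. X moves first.
LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
         (0,3,6), (1,4,7), (2,5,8),   # columns
         (0,4,8), (2,4,6)]            # diagonals

def winner(board):
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def value(board, player):
    """Game value under perfect play: +1 if X wins, -1 if O wins, 0 for a draw."""
    w = winner(board)
    if w is not None:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0  # board full: draw
    results = []
    for i, cell in enumerate(board):
        if cell == ' ':
            nxt = board[:i] + (player,) + board[i+1:]
            results.append(value(nxt, 'O' if player == 'X' else 'X'))
    return max(results) if player == 'X' else min(results)

print(value((' ',) * 9, 'X'))  # prints 0: tic-tac-toe is a draw
```

Checkers has been solved in the same spirit (that is what Chinook’s endgame database amounts to), but its game tree required years of distributed computation rather than a few milliseconds; chess remains far out of reach.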

After all of this, you begin to wonder: what is it that makes chess harder to figure out than checkers or tic-tac-toe? The question comes up again when you are working on your maths homework. Why are some maths problems easier than others? Can you come up with a way of measuring the `hardness’ of a problem? Well, that is where computational complexity theory comes in: it tells you how `hard’ a problem is to solve, given limited resources.

The limited resources part is important. It turns out that, if you had an infinite amount of time and battery life, you could solve any problem at all using your iPhone, or a pocket calculator. Heck, given infinite time, you could write down every possible chess game by hand, and then find out whether white or black always wins, or if they always draw. Of course, you could do it in shorter time if you got a million people to work on it simultaneously, but then you are using up space for all of those people. Either way, the problem is only interesting when you are limited in how much time or space you have (or energy, or any other resource you care to name). Once you have a resource limit, it makes sense to talk about whether one problem is harder than another (If you want details of how this is done, see for example Aaronson’s blog for his lecture notes on computational complexity theory).

This all seems rather abstract so far. But the study of complexity theory turns out to have some rather interesting consequences in the real world. For example, remember the situation with tic-tac-toe. You might know the strategy that lets you only win or draw. But suppose you were playing a dumb opponent who was not aware of this strategy – they might think that it is possible to beat you. Normally, you could convince them that you are unbeatable by just showing them the strategy so they can see for themselves. Now, imagine a super-smart alien came down to Earth and claimed that, just like with tic-tac-toe, it could never lose at chess. As before, it could always convince us by telling us its strategy — but then we could use the alien’s own strategy against it, and where is the fun in that? Amazingly, it turns out that there is a way that the alien can convince us that it has a winning strategy, without ever revealing the strategy itself! This has been proven by the computational complexity theorists (the method is rather complicated, but you can follow it up here.)

So what has this to do with physics? Let's start with the black-hole firewall paradox. The usual black-hole information paradox says: since information cannot be destroyed, and information cannot leak out of a black hole, how do we explain what happens to the information (say, on your computer's hard drive) that falls into a black hole, when the black hole eventually evaporates? One popular solution is to say that the information does leak out of the black hole over time, just very slowly and in such a highly scrambled-up form that it looks like randomness. The firewall paradox throws a spanner in the works of this solution: it says that if the information really does leak out in this way, then it would be possible to violate the laws of quantum mechanics.

Specifically, say a quantum system fell into a black hole. If you gathered all of the leaked information about its quantum state from outside the black hole, and then jumped into the black hole just before it finished evaporating, you could combine this information with whatever is left inside to obtain more information about the quantum state than the laws of quantum mechanics allow. To avoid this, there would have to be a wall of extremely high-energy radiation at the event horizon (the firewall) that stops you from carrying the outside information to the inside, but this seems to contradict what we thought we knew about black holes (and it upsets Stephen Hawking). So if we try to solve the information paradox by allowing information to leak out of the black hole, we just end up in another paradox!

Firewall
Source: New Scientist

One possible resolution comes from computational complexity theory. It turns out that, before you can break the laws of quantum mechanics, you first have to `unscramble’ all of the information that you gathered from outside the black hole (remember, when it leaks out it still looks very similar to randomness). But you can’t spend all day doing the unscrambling, because you are falling into the black hole and about to get squished at the singularity! Harlow and Hayden showed that in fact you do not have nearly as much time as you would need to unscramble the information before you get squished; it is simply `too hard’ complexity-wise to break the laws of quantum mechanics this way! As Scott Aaronson puts it, the geometry of spacetime is protected by an “armor” of computational complexity, kind of like a computational equivalent of the black hole’s event horizon. Aaronson goes further, speculating that there might be problems that are normally `hard’ to solve, but which become easy if you jump into a black hole! (This is reminiscent of my own musings about whether there might be hypotheses that can only be falsified by an act of black hole suicide).
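To get a feel for why the unscrambling is hopeless, here is a rough back-of-envelope sketch in Python. I should stress that this is my own crude caricature and not Harlow and Hayden's actual argument (theirs concerns quantum circuit complexity): I simply assume that decoding generically costs of order 2^n elementary steps for n radiated qubits, performed at the absurdly optimistic rate of one step per Planck time, and compare the result with the black hole's evaporation time.

```python
# Back-of-envelope: decoding time vs. evaporation time for a solar-mass
# black hole, assuming (crudely) that decoding costs ~2^n Planck times.
import math

G        = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
hbar     = 1.055e-34   # reduced Planck constant, J s
c        = 2.998e8     # speed of light, m/s
M        = 2.0e30      # one solar mass, kg
t_planck = 5.39e-44    # Planck time, s

# Hawking evaporation time of a Schwarzschild black hole
t_evap = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)

# Bekenstein-Hawking entropy (in units of k_B) and the qubit count
S_over_k = 4 * math.pi * G * M**2 / (hbar * c)
n_qubits = S_over_k / math.log(2)

# log10 of the assumed decoding time, i.e. 2^n Planck times
log10_t_decode = n_qubits * math.log10(2) + math.log10(t_planck)

print(f"evaporation time ~ 10^{math.log10(t_evap):.0f} s")
print(f"decoding time    ~ 10^(10^{math.log10(log10_t_decode):.1f}) s")
```

The evaporation takes around 10^75 seconds (roughly 10^67 years), while the assumed decoding time comes out near 10^(10^76.7) seconds: a number so large that the comparison is not even close. On this crude estimate, you get squished at the singularity long, long before you finish.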

But the matter is more subtle. For one thing, the whole edifice of computational complexity theory rests on the conjecture that some problems are intrinsically harder than others – specifically, that there is no ingenious, as-yet-undiscovered algorithm that would let us solve the hard problems just as quickly as the easy ones (for the nerds out there, I'm just saying nobody has proven that P is not equal to NP). If we are going to take the idea of the black hole complexity horizon seriously, then we must assume this is true, otherwise a sufficiently clever computer program would let us bypass the time constraint and break quantum mechanics in the firewall scenario. Whether or not you find this plausible, you must admit there may be something fishy about a physical law that requires P not equal to NP in order to work.

Furthermore, even if we grant that this is the case, it is not clear that the complexity barrier is that much of a barrier. Just because a problem is hard in general does not mean it can’t be solved in specific instances. It could be that for a sufficiently small black hole and sufficiently large futuristic computing power, the problem becomes tractable, in which case we are back to square one. Given these considerations, I think Aaronson’s faith in the ability of computational complexity to save us from paradoxes might be premature — but perhaps it is worth exploring just in case.

Black holes, bananas, and falsifiability.

Previously I gave a poor man's description of the concept of `falsifiability', which is a cornerstone of what most people consider to be good science. This is usually expressed in a handy catchphrase like `if it isn't falsifiable, then it isn't science'. For the layperson, this is a pretty good rule of thumb. A professional scientist or philosopher would be more inclined to wonder about the converse: if a theory is falsifiable, does that guarantee that it is science? Karl Popper, the man behind the idea, has been quoted as saying that basically yes: not only must a scientific theory be falsifiable, but a falsifiable theory is also scientific [1]. However, critics have pointed out that it is possible to have theories that are not scientific and yet can still be falsified. A classic example is astrology, which has been "thoroughly tested and refuted" [2] (although sadly this has not stopped many people from believing in it). Given that astrology is falsifiable (and falsified), it seems one must either concede that it was a scientific hypothesis which has since been disproved, or else concede that we need something more than falsifiability to distinguish science from pseudo-science.

Things are even more subtle than that, because a falsifiable statement may appear more or less scientific depending on the context in which it is framed. Suppose that I have a theory which says that there is cheese inside the moon. We could test this theory, perhaps by launching an expensive space mission to drill the moon for cheese, but nobody would ever fund such a mission, because the theory is clearly ludicrous. Why is it ludicrous? Because within our existing theoretical framework and our knowledge of planet formation, there is no role played by astronomical cheese. However, imagine that we lived in a world in which it had been discovered that cheese was a naturally occurring substance in space, and indeed had a crucial role to play in the formation of planets. In some instances, the formation of moons might lead to them retaining their cheese substrate, hidden beneath layers of meteorite dust. Within this alternative historical framework, the hypothesis that there is cheese inside the moon would be a perfectly reasonable scientific hypothesis.

Wallace and Gromit
Yes, but does it taste like Wensleydale?

The lesson here is that the demarcation problem between science and pseudoscience (not to mention non-science and un-science, which are different concepts [2]) is not a simple one. In particular, we must be careful about how we use ideas like falsification to judge the scientific content of a theory. So what is the point of all this pontificating? Well, recently the prominent scientist and blogger Sean Carroll argued that the scientific idea of falsification needs to be "retired". In particular, he argued that String Theory and theories with multiple universes have been unfairly branded as `unfalsifiable' and thus have not been given the recognition by scientists that they deserve. Naturally, this alarmed people, since it really sounded like Sean was saying `scientific theories don't need to be falsifiable'.

In fact, if you read Sean’s article carefully, he argues that it is not so much the idea of falsifiability that needs to be retired, but the incorrect usage of the concept by scientists without sufficient philosophical education. In particular, he suggests that String Theory and multiverse theories are falsifiable in a useful sense, but that this fact is easily missed by people who do not understand the subtleties of falsifiability:

“In complicated situations, fortune-cookie-sized mottos like `theories should be falsifiable’ are no substitute for careful thinking about how science works.”

Well, one can hardly argue against that! Except that Sean has committed a couple of minor crimes in the presentation of his argument. First, while Sean’s actual argument (which almost seems to have been deliberately disguised for the sake of sensationalism) is reasonable, his apparent argument would lead most people to draw the conclusion that Sean thinks unfalsifiable theories can be scientific. Peter Woit, commenting on the related matter of Max Tegmark’s recent book, points out that this kind of talk from scientists can be fuel for crackpots and pseudoscientists who use it to appear more legitimate to laymen:

“If physicists like Tegmark succeed in publicizing and getting accepted as legitimate mainstream science their favorite completely empty, untestable `theory’, this threatens science in a very real way.”

Secondly, Sean claims that String Theory is at least in principle falsifiable, but if one takes the appropriately subtle view of falsifiability that he suggests, one must admit that `in principle' falsifiability is a rather weak requirement. After all, the cheese-in-the-moon hypothesis is falsifiable in principle, as is the assertion that the world will end tomorrow. At best, Sean's argument shows that we need criteria other than falsifiability to judge whether String Theory is scientific, and given the large number of free parameters in the theory, one wonders whether it won't fall prey to something like the `David Deutsch principle', which says that a theory should not be too easy to modify retrospectively to fit the observed evidence.

While the core idea of falsifiability is here to stay, I agree with Scott Aaronson that remarkably little progress has been made in building on it since Popper. For all their ability to criticise and deconstruct, philosophers have not really been able to tell us what does make a theory scientific, if not merely falsifiability. Sean Carroll suggests asking whether a theory is `definite', in that it makes clear statements about reality, and `empirical', in that these statements can be plausibly linked to physical experiments. Perhaps the falsifiability of a claim should also be understood as relative to a prevailing paradigm (see Kuhn).

In certain extreme scenarios, one might also be able to make the case that the falsifiability of a statement is relative to the place of the scientists in the universe. For example, it is widely believed amongst physicists that no information can escape a black hole, except perhaps in a highly scrambled-up form, as radiated heat. But as one of my friends pointed out to me today, this seems to imply that certain statements about the interior of the black hole cannot ever be falsified by someone sitting outside the event horizon. Suppose we had a theory that there was a banana inside the black hole. To check the theory, we would likely need to send some kind of banana-probe (a monkey?) into the black hole and have it come out again — but that is impossible. The only way to falsify such a statement would be to enter the black hole ourselves, but then we would have no way of contacting our friends back home to tell them they were right or wrong about the banana. If every human being jumped into the black hole, the statement would indeed be falsifiable. But if exactly half of the population jumped in, is the statement falsifiable for them and not for anyone else? Could the falsifiability of a statement actually depend on one’s physical place in the universe? This would indeed be troubling, because it might mean there are statements about our universe that are in principle falsifiable by some hypothetical observer, but not by any of us humans. It becomes disturbingly similar to predictions about the afterlife – they can only be confirmed or falsified after death, and then you can’t return to tell anyone about it. Plus, if there is no afterlife, an atheist doesn’t even get to bask in the knowledge of being correct, because he is dead.

We might hope that statements about quasi-inaccessible regions of experience, like the insides of black holes or the contents of parallel universes, could still be falsified `indirectly’ in the same way that doing lab tests on ghosts might lend support to the idea of an afterlife (wouldn’t that be nice). But how indirect can our tests be before they become unscientific? These are the interesting questions to discuss! Perhaps physicists should try to add something more constructive to the debate instead of bickering over table-scraps left by philosophers.

[1] “A sentence (or a theory) is empirical-scientific if and only if it is falsifiable.” Popper, Karl ([1989] 1994), “Falsifizierbarkeit, zwei Bedeutungen von”, pp. 82–86 in Helmut Seiffert and Gerard Radnitzky (eds.), Handlexikon zur Wissenschaftstheorie. (So there.)

[2] See the Stanford Encyclopedia of Awesomeness.