Danny Greenberger on AI

Physicist Danny Greenberger — perhaps best known for his classic work with Horne and Zeilinger in which they introduced the “GHZ” state to quantum mechanics — has a whimsical and provocative post over at the Vienna Quantum Cafe about creation myths and Artificial Intelligence.

The theme of creation is appropriate, since the contribution marks the debut of the Vienna blog, an initiative of the Institute of Quantum Optics and Quantum Information (incidentally, my current place of employment). Apart from drumming up some press for them, I wanted to elaborate on some of Greenberger’s interesting and, dare I say, outrageous ideas about what it means for a computer to think, and what it has to do with mankind’s biblical fall from grace.

For me, the core of Greenberger’s post is the observation that the Turing test for artificial intelligence may not be as meaningful as we would like. Alan Turing, who basically founded the theory of computing, proposed the test in an attempt to pin down what it means for a computer to become ‘sentient’. The problem is, the definition of sentience and intelligence is already vague and controversial in living organisms, so it seems hopeless to find such a definition for a computer that everyone could agree upon. Turing’s ingenious solution was not to ask whether a computer is sentient in some objective way, but whether it could fool a human into thinking that it is also human; for example, by having a conversation over e-mail. Thus, a computer can be said to be sentient if, in a given setting, it is indistinguishable from a human for all practical purposes. The Turing test thereby takes a metaphysical problem and turns it into an operational one.

Turing’s test is not without its own limitations and ambiguities. What situation is most appropriate for comparing a computer to a human? On one hand, a face-to-face interaction seems too demanding on the computer, requiring it to perfectly mimic the human form, facial expressions, even smell! On the other hand, a remote interview consisting of only yes-or-no questions is clearly too restrictive. Another problem is how to deal with misclassification. If our test is too tough, we might incorrectly identify some people (unimaginative, stupid, or illiterate) as non-sentient, like Dilbert’s pointy-haired boss in the comic below. Does this mean that the test does not adequately capture sentience? Given the variation in humans, it is likely that any test lenient enough to pass every human will also be easy for a simple computer program to pass. Should we then regard such a program as sentient?

[Dilbert comic featuring the pointy-haired boss]

Greenberger suggests that we should look for ways to augment the Turing test, by looking for other markers of sentience. He takes inspiration from the creation myth of Genesis, wherein Adam and Eve become self-aware upon eating from the tree of knowledge. Greenberger argues that the key message of this story is that in order for a being to transcend from mindless automaton to independent, free-willed entity, it must explicitly transgress the rules set by its creator, without having been ‘programmed’ to do so. This act of defiance represents the first act of free will and hence the ascension to sentience. Interestingly, by this measure, Adam and Eve became self-aware the moment they each decided to eat the apple, even before they actually committed the act.

How can we implement a similar test for computers? Clearly we need to impose some further constraints: no typical computer is programmed to break down, yet when one does, it seems unreasonable to regard this as a conscious transgression of established rules signifying sentience. Thus, the actions signifying transgression should be sufficiently complex that they cannot be performed accidentally, as a result of minor errors in the code. Instead, we should consider computers that are capable of evolving over time, independently of human intervention, so that they have some hope of developing sufficient complexity to overcome their initial programming. Even then, a sentient computer’s motivations might also change, such that it no longer has any desire to perform the action that would signify its sentience to us, in which case we might mistake its advanced complexity for chaotic noise. Without maintaining a sense of the program’s motivations, we cannot assess whether its actions are intelligent or stupid. Indeed, perhaps when your desktop PC finally crashes for the last time, it has actually attained sentience and acted on its one desire, which happens to be its own suicide.

Of course, the point is not that we should reach such bizarre conclusions, but that in defining tests for sentience beyond the Turing test, we should nevertheless not stray far from Turing’s original insight: our ideas of what it means to be sentient are guided by our idea of what it means to be human.

 
