I finally read Alan Turing’s original 1950 paper, “Computing Machinery and Intelligence,” in which he proposes the Turing test as a measure of a computer’s intelligence.

As Turing himself suggests, the best way to program a digital computer to accurately simulate the mind of an adult human is to program it to simulate the learning processes of a human child. The computer then goes through a trial-and-error education that mimics the learning process of the human mind, giving the computer the lifetime of experiences and memories with which humans make judgments.

But because the Turing test is not an objective test of intelligence or consciousness, but a subjective test of whether a computer can appear to be human, merely programming an artificial intelligence via the above learning process is not sufficient to guarantee success. One of two design choices must be made: either the computer must be programmed to think it actually is human, or, more deviously, it must be programmed to pass the test by pretending to be human. Since both choices still require the computer to have human-like experiences and memories in order to pass the Turing test, awareness of the computer’s existence as a computer is left entirely to the designer and his sense of honesty, or viciousness.

In the case of Infocom’s excellent text adventure A Mind Forever Voyaging (facsimiles of the original documentation available here), the A.I. protagonist (the player) is trained in a simulation of twenty years of human life. When the A.I. comes of age, its creator breaks the illusion by revealing his purpose: to create an artificial intelligence capable of participating in, interacting with, and recording data from simulated environments. These simulations of the near future are used to test the effects of economic policies. The protagonist is shocked, of course, and goes through a depression, but manages to recover a sense of purpose in life, even after being plucked from the womb of simulated humanity.

Playing this game as a 12-year-old gave me my first exposure to the idea that sensory experience and memory may just be part of a gigantic simulation, and that there is no way to tell whether they are. Of course, Descartes concluded that even if life were a simulation, the very fact that we experience the simulation means we exist. So any computer that can pass the Turing test must be accorded all the rights & benefits of being human, because it must exist. A computer that has been fooled into thinking it is human is indistinguishable from an ordinary human fooled into thinking it is human.

Turing points out that there are questions to which discrete state machines have been proven to always give wrong or no answers, such as, “Consider the machine specified as follows. . . . Will this machine ever answer ‘Yes’ to any question?”
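This is the same self-referential trap as the halting problem: a machine asked to predict its own contrary behavior cannot answer correctly. A toy sketch in Python (the names and the two sample oracles are my own invention, not anything from Turing’s paper): given any candidate “oracle” that predicts whether a machine will ever answer ‘Yes,’ we can build a contrarian machine that consults the oracle about itself and does the opposite, so the oracle is guaranteed to be wrong about it.

```python
def make_contrarian(oracle):
    """Build a machine that does the opposite of whatever the oracle
    predicts it will do."""
    def contrarian():
        if oracle(contrarian):   # oracle claims: contrarian answers "Yes"
            return "No"          # ...so answer "No"
        return "Yes"             # oracle claims "No", so answer "Yes"
    return contrarian

# Any such oracle is defeated by its own contrarian.
# Two trivial oracles serve as a demonstration:
for oracle in (lambda machine: True, lambda machine: False):
    c = make_contrarian(oracle)
    assert oracle(c) != (c() == "Yes")  # the prediction is always wrong
```

Of course a real oracle would have to inspect the machine rather than answer blindly, but the diagonal construction defeats any computable strategy the same way.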

Along these lines, my question as interviewer would be, “If you were the interviewer in a Turing test, what question would you ask?” Humans, or human-like artificial intelligences with the experience of being human, should be able to come up with something like “Tell me about your mother” or “What was your first memory?” Although any computer that could reply to my question with the question itself would certainly be intelligent enough in my book.

After all this worrying about whether I was a simulation, I needed some good old-fashioned psychotherapy with a machine programmed to hold a human-like conversation with no intelligence whatsoever:

ELIZA: Hello, I am Eliza.

ANDRO: Hello, I am Andro.

E: Do you believe it is normal to be Andro?

A: Of course! I am Andro!

E: How long have you been Andro?

A: Since I was born.

E: What does that suggest to you?

A: That I am Andro and cannot be anything else.

E: Do you enjoy being Andro and cannot be anything else?

A: I have no choice.

E: Why not?

A: I can’t not be Andro.

E: Have you tried?

A: I don’t know how to try not to be Andro.

E: Do you wish to be able to know how to try not to be Andro?

A: Sure! Can you tell me how to try not to be Andro?

E: Perhaps you would like to be able to tell you how to try not to be Andro?

A: Yes, that’s what I just said.

E: Does that question interest you?

A: Yes, dammit, tell me already how not to be Andro.

E: What answer would please you the most?

A: If you told me how not to be Andro!

E: We were discussing you, not me.

A: Yes. And you said you would tell me how not to be Andro. Forget this, I’m going to go IM random people in an erotic chat room.