The Anti-Turing Test

Alan Turing is something of a hero to many people who have a background in formal computer science. Even before any real computers had been invented, Turing had laid out innovative and fundamental principles governing how they could work. In a way, he was a war hero too, being one of the top mathematicians involved in breaking German cyphers at Bletchley Park.

After the war, Turing found himself persecuted for his sexual orientation and (probably) committed suicide in 1954. It wasn’t until 2009 that the British Government issued a formal apology for his treatment.

In a 1950 publication, Turing addressed the question "Can machines think?" with his usual rigour, and pointed out that neither "machine" nor "think" can be formally defined in such a way as to make the question really meaningful. He proposed side-stepping the issue and instead settling for what became known as the Turing Test: can a machine "fool" a judge into thinking that it is human? If it is utterly indistinguishable from a human, then, logically, you have exactly the same evidence that it is thinking as you have for other people.

From a practical point of view, Turing suggested that the test could take place by exchange of teleprinter messages, so that the judge would have no clues about whether a human or a machine was communicating with him.

The Turing Test was supposed to be a thought experiment, a way to reason about the concepts and examine assumptions. Turing pointed out that the test could never prove that a machine was conscious, but then it's equally useless at proving that a human at the other end of the teleprinter link is conscious either.

The Turing Test inspired the invention of the chatbot, a computer program that tries to respond to text chat the way a human would. But the first of these, Eliza, really exposed the problems of using humans to judge the Turing Test: it turns out that humans are ridiculously easy to fool. Eliza was born in 1966, and with the primitive speed and storage of the computers of the time, could only manage a very basic level of functionality: mainly recognizing certain "trigger words" and generating a pre-prepared response, a scheme sketched below.
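
To make the trigger-word idea concrete, here is a minimal sketch in Python. The rules and canned responses are invented for illustration; they are not taken from Weizenbaum's actual Eliza script.

    import random

    # Illustrative trigger words mapped to canned responses
    # (hypothetical examples, not Weizenbaum's actual rules).
    RULES = {
        "mother": ["Tell me more about your family."],
        "always": ["Can you think of a specific example?"],
        "sad": ["Why do you think you feel sad?"],
    }

    # Generic replies used when no trigger word matches.
    FALLBACKS = ["Please go on.", "How does that make you feel?"]

    def respond(text):
        """Return the canned response for the first trigger word
        found, or a generic fallback when nothing matches."""
        words = text.lower().split()
        for trigger, replies in RULES.items():
            if trigger in words:
                return random.choice(replies)
        return random.choice(FALLBACKS)

    print(respond("My mother is always criticising me"))
    # -> "Tell me more about your family."

No understanding is involved anywhere here: a match on a single word selects the reply, and a bland fallback papers over everything else. Yet this is essentially the level of trick that fooled Eliza's users.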

Yet many people exposed to Eliza’s chat quite happily accepted that they were in communication with another human, and some even flatly refused to believe the truth when it was revealed. What Eliza did could never be called “thinking”, but she was passing the Turing Test anyway. (A recent descendant of Eliza, Apple’s Siri, does have much more sophisticated language processing, but falls back on Eliza’s simple tricks to cover up the gaps.)

One interpretation of Eliza’s success is that humans are so accustomed to assuming that everyone else has thoughts that the slightest evidence is accepted. In fact, it’s common enough to assign thoughts and desires to inanimate objects: “The car is reluctant to start this morning”.

But I’ve been looking at it from the other direction. There is no way at all in which a human could prove that he or she was not a chatbot. No matter how clever, or creative, or emotional a response seemed, it could have been just the result of some clever programming. I know that I’m conscious, but I’m not so sure about the rest of you. In fact I think you’re all zombies.
