Last Sunday, when I was about to wrap up my Red Pills of the Week column, I received a Twitter notification that was both exciting & disturbing: The Turing test, that technological Rubicon dividing mindless Roombas from German-accented Terminators, had finally been passed! The news read that a computer program designed by a team of Russians had allegedly succeeded in convincing 33 percent of the judges, in a test conducted at the Royal Society in London, that instead of a computer they were chatting with a 13-year-old boy from Ukraine named ‘Eugene’.
“Our main idea was that he can claim that he knows anything, but his age also makes it perfectly reasonable that he doesn’t know everything,” said Vladimir Veselov, one of the creators of the programme. “We spent a lot of time developing a character with a believable personality.”
But just as I was getting ready to order some high-powered LED flashlights & a copy of Robopocalypse the next Monday –the book explains why flashlights would be essential to combat our silicon-based overlords; also check out this video— all the online buzzing powered down faster than GLaDOS after taking a beating with a portal gun. The claim, it turns out, was no more real than the cake in Aperture Laboratories.
Oh, well. There’s always the Zombie Apocalypse, right?
Nevertheless, all this online commotion & the readiness many people showed in accepting the news got me thinking: Why is it that we’re so obsessed with the Turing test? Why do we even think it would be a valid assessment of Artificial Intelligence?
It was in 1950 that British mathematician & computer scientist Alan Turing (1912-1954) published Computing Machinery and Intelligence, in which he posed the question: Can a machine think? Turing answered in the affirmative, but in doing so he pointed to a bigger conundrum –if a computer could think, how could we tell? Here’s where Turing proposed a solution: If a machine could hold a conversation with a person, and that person couldn’t tell the difference between the machine & a human being, then from that person’s point of view the machine was capable of thinking.
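Just to make the mechanics concrete, here is a rough Python sketch of how such a judging session might be scored. Everything in it (the canned replies, the five-exchange session, the 30 percent bar) is a made-up illustration rather than the actual Royal Society procedure: each judge chats blind with a hidden party and records a verdict, and the organisers simply count the share of judges who guessed ‘human’. That, presumably, is how the 33 percent figure in the ‘Eugene’ story was arrived at.

```python
import random

# Purely illustrative sketch: none of this reflects the actual test setup;
# the functions, replies and threshold below are assumptions for demonstration.

def hidden_party_reply(message, party):
    """Return a reply from the hidden interlocutor (machine or human).
    Both are stubbed out here; a real test would route the message to
    a chat program or to a human volunteer in another room."""
    if party == "machine":
        return "I am thirteen, I live in Odessa, and I like guinea pigs."
    return "Sorry, what do you mean exactly?"

def judge_session(party, num_exchanges=5):
    """Simulate one judge chatting blind with a hidden party and
    returning a verdict: 'human' or 'machine'."""
    transcript = []
    for _ in range(num_exchanges):
        question = "Tell me something about yourself."
        transcript.append(hidden_party_reply(question, party))
    # A real judge would weigh coherence, knowledge, evasiveness and typos
    # across the transcript; this stub judge just guesses at random.
    return random.choice(["human", "machine"])

def run_test(num_judges=30, pass_threshold=0.30):
    """Run the machine past a panel of judges and report the share fooled."""
    fooled = sum(judge_session("machine") == "human" for _ in range(num_judges))
    share = fooled / num_judges
    verdict = "pass" if share > pass_threshold else "fail"
    print(f"{share:.0%} of judges thought the machine was human "
          f"({verdict} at the {pass_threshold:.0%} threshold)")

if __name__ == "__main__":
    run_test()
```

The 30 percent bar in the sketch isn’t arbitrary, by the way: Turing himself predicted that by the year 2000 an average interrogator would have no better than a 70 percent chance of making the right identification after five minutes of questioning, which seems to be the figure the Eugene organisers adopted as their passing grade.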
We should also point out that there have been several iterations of Turing’s test. The original version, in fact, originated from the premise of a man & a woman sitting in different rooms, with a third participant acting as the judge, whose job was to determine the gender of the persons conversing with him from the other rooms through a computer; the trick in the test was that the man would try to deceive the judge into believing he was the woman –and it doesn’t take a computer genius to realize that Turing’s concealed homosexuality was quite likely the reason he chose deception as proof of intelligence.
I was considering these ideas this afternoon, while listening to the latest episode of the Skeptiko podcast, in which Alex Tsakiris interviewed Princeton neuroscientist Dr. Michael Graziano, author of the book Consciousness and the Social Brain. As you may suspect, Dr. Graziano is a hardcore materialist, and the theory he’s trying to elaborate views human consciousness strictly from a biological standpoint.
Alex Tsakiris: […]Okay, Dr. Graziano, tell us what’s necessary and sufficient to create consciousness. That would be like a first logic, rationalist kind of thing. What’s necessary and sufficient to create human consciousness?
Dr. Michael Graziano: Well one way to put it, and I have often used this example as it kind of nicely encapsulates our approach. And it is certainly totally different from the perspective that you outlined that I think a lot of people take. So here is an example – I had a friend who was a psychologist and he told me about a patient of his. And this patient had a delusion, he thought he had a squirrel in his head. And that’s a little odd, but people have odd delusions and it’s not that unusual. Anyway, he was certain of it and you could not convince him otherwise. He was fixed on this delusion and he knew it to be true. Now, you could tell him that’s illogical and he would say yeah, that’s okay, but there are things in the universe that transcend logic. You could not argue him out of it. So there were kind of two directions you could take in trying to explain this phenomenon. One would be to ask okay, how does his brain produce a squirrel? How did the neurons secrete the squirrel? Now, that would be a very unproductive approach. And another approach would be to say how does his brain construct that self-description? And how does it arrive at such certainty that the description is correct? And how does the brain not know that it’s a self-description? Now, those things you can get at from an objective point of view. You can answer those questions.
And in effect, I think you could replace the word ‘squirrel’ with the word ‘awareness’ and I think that the whole thing is exactly encapsulated. I think almost all approaches to consciousness take the first direction, how does the brain produce a squirrel – it doesn’t.
Herein lies the reason why modern Science has embraced the Turing test: We should accept a deception from a computer as a sign of intelligence, because our own brains deceive us into thinking WE are conscious! I am a biological robot whose brain is tricking me into believing I’m Red Pill Junkie, and you are a biological robot tricked by your brain into believing a different identity. But it’s ALL an illusion as far as modern Neuroscience is concerned, and since computer scientists also assume the brain is nothing but a data-processing system, this is the model they’re currently working on in order to achieve the Holy Grail of A.I.
But what if they are wrong? What if Alex & many of the researchers he’s interviewed on his podcast are right in pointing out that Consciousness is the ultimate test for materialistic Science, precisely because of Science’s incapacity to adequately quantify & measure it? Ironic, considering how every intellectual achievement, including Science, originates from Mind –the one thing we cannot put under a microscope.
Dr. Graziano & other skeptics might accuse me of being an uncredentialed woo-woo trying to defend a magical belief system, and argue that even though Neuroscience hasn’t fully explained the emergence of consciousness in our brain, it doesn’t mean it won’t do so in the future. I would point those skeptics to the work of Jaron Lanier, a fellow who IMO knows a thing or two about computers –after all, he’s the one who coined the term ‘virtual reality’– and who is not only VERY skeptic of the Turing test’s efficacy to measure intelligence in an artificial system, but also shares my suspicions that human consciousness cannot be explained away purely from a mechanistic perspective:
But the Turing test cuts both ways. You can’t tell if a machine has gotten smarter or if you’ve just lowered your own standards of intelligence to such a degree that the machine seems smart. If you can have a conversation with a simulated person presented by an AI program, can you tell how far you’ve let your sense of personhood degrade in order to make the illusion work for you?
Which is *precisely* what happened with the judges testing the Eugene program. You see, it may be that in our rush to raise our expectations of artificial intelligence, we have inadvertently lowered our expectations of teenage intelligence –the machines are not getting smarter, ’tis the meatbags who are getting dumber!
So fear not, fellow Coppertops, for even if tomorrow, a year or ten from now, we finally get the news that some geek managed to program a computer that could pass the legendary Turing test, I highly doubt it would mean Skynet is about to wake up & purge the world of the human infestation.
…But keep the flashlights handy, just in case.