What The Turing Test Tells Us About The Future Of Technology

How do we know if a machine is capable of thought?

The age of talking robots is here. Amazon Alexas and Google Assistants are popping up in homes, and most smartphones come with a voice assistant that can help perform tasks. We know that these devices are far from the advanced artificial intelligence dreamed up in movies like Her or Blade Runner, yet we inch closer and closer to realistic talking robots each year. What we don’t know is where to draw the line between an AI that talks to you like a robot and one that talks to you like a human. There’s a clever tool devised to find that line, though: the Turing test.

The Turing test is an experiment meant to determine a machine’s intelligence. It’s no longer taken very seriously among artificial intelligence researchers, but it remains a cultural staple as it raises fascinating questions about the nature of consciousness not just for robots but for humans, too.

What Is The Turing Test?

In 1950, English computer scientist Alan Turing wrote a paper discussing how machines “think.” The possibility of machines thinking was, and continues to be, a hot topic in the field of artificial intelligence. But in this paper, Turing argued that asking whether computers can “think” is a waste of time. For one, humans struggle to define what “thinking” is and how we do it. It may seem obvious that a human thinks, but what about a dolphin? Or a dog? Or an ant? Or an atom? Trying to answer these questions gets you so bogged down in the definitions of words that it’s no help when you’re simply trying to make a computer perform tasks.

Instead of dealing with the question of thinking, Turing suggested something new: the imitation game (also the name of a 2014 biopic about him). The concept is pretty simple, and involves two people and a machine. One of the people is the interrogator, who converses by text with both the computer and the human, without knowing which is which. The interrogator then has to guess which was the human and which was the computer.

The end goal of the process — which later became known as the Turing test in honor of its creator — is to see how often interrogators guess correctly who is who. If interrogators consistently guess correctly, the computer fails the test. If the results are close to 50/50, the interrogators’ choices were no better than random guesses, and the computer passes.
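That pass criterion can be expressed as a tiny simulation. This is just an illustrative sketch, not anything from Turing’s paper: the trial counts, tolerance margin, and function names are all invented here.

```python
import random

def run_trials(p_correct, n_trials=10_000, seed=0):
    """Simulate interrogator sessions where the interrogator
    identifies the computer correctly with probability p_correct.
    Returns the fraction of correct identifications."""
    rng = random.Random(seed)
    correct = sum(rng.random() < p_correct for _ in range(n_trials))
    return correct / n_trials

def passes_turing_test(accuracy, margin=0.05):
    """The machine 'passes' if interrogators do no better than
    chance (50%), within some tolerance (invented margin here)."""
    return abs(accuracy - 0.5) <= margin

# A convincing machine: interrogators can only guess at chance level.
print(passes_turing_test(run_trials(0.5)))  # True
# A weak machine: interrogators spot the computer 90% of the time.
print(passes_turing_test(run_trials(0.9)))  # False
```

The 50/50 threshold is the whole trick: a machine doesn’t win by being identified as human every time, only by making the interrogator’s guess worthless.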

Has Anything Passed The Turing Test?

In 1990, American inventor Hugh Loebner set up a yearly contest that awards prizes to any computer that can pass an extended version of the Turing test. The Gold and Silver Loebner Prizes have, so far, never been won. A Bronze Prize is given out each year to the best of the entries, but we have yet to see a program that can consistently trick humans into thinking it’s a human.

The Test’s Shortcomings

It’s been over 70 years since the Turing test was first proposed, and in that time it’s lost some of its initial glow. There are a number of criticisms of the test, some of which bring up important points about the nature of artificial intelligence.

The Test Doesn’t Prove Consciousness

One of the biggest criticisms of the Turing test is that it doesn’t do a good job of replacing the initial question of whether robots are capable of thought. In 1980, American philosopher John Searle made this point most strongly with a thought experiment called the Chinese Room Argument.

In this thought experiment, a person with no knowledge of any Chinese language is brought into a room with boxes filled with Chinese characters. Along with these boxes is a book of instructions on what to do with the symbols. Outside the room, native Chinese speakers write questions, which are fed into the room. Using the instructions, the person inside can formulate a response to each question while understanding nothing of what’s really going on. To the people outside, it seems the person inside must be fluent in Chinese.
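The setup can be caricatured as a lookup table: the operator matches symbols to symbols with no idea what any of them mean. This is a toy sketch of Searle’s scenario; the example phrases and the rulebook itself are invented.

```python
# A toy Chinese Room: the operator follows a rulebook mapping
# symbols to symbols, with no understanding of either side.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",      # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "当然会。",      # "Do you speak Chinese?" -> "Of course."
}

def room_operator(question):
    # The operator just looks symbols up; 'understanding' never enters.
    # Unknown input gets a stock reply: "Please say that again."
    return RULEBOOK.get(question, "请再说一遍。")

print(room_operator("你好吗？"))  # 我很好，谢谢。
```

From outside, the room appears fluent; inside, there is only symbol shuffling. Searle’s claim is that a computer passing the Turing test might be doing nothing more than this.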

This thought experiment is a fascinating one, but it has garnered criticisms of its own. For one, Turing already addressed this line of thinking in his initial paper about the imitation game. He argued that getting bogged down in whether a computer can “understand” language is useless. And really, who is to say that the human brain isn’t a room filled with symbols and instructions that takes in inputs and spits out an output without truly understanding it? 

Human Intelligence Isn’t All Intelligence

The bigger problem with the Turing test is that it focuses on a very specific kind of human intelligence. It all comes down to the machine’s ability to handle human language, which is one of the most difficult tasks you could possibly give a computer.

There’s a quote often attributed to Albert Einstein, though he was definitely not the one who said it: “Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.” It’s the kind of saying you usually find on inspirational Instagram pages, but it does have a point about the way humans evaluate intelligence. Already, computers are able to perform calculations that would take humans thousands of years.

The Turing test does evaluate one area of machine intelligence, but the ability to imitate humans is not the only thing worth looking at. Most artificial intelligence doesn’t concern itself with this at all. Consider a GPS: there’s not a single one on the market today that can talk to you like another human. But there’s also not a human who is capable of instantaneously finding you the best driving route between any two locations.

Humans Are Easily Tricked

The last criticism of the Turing test is actually a criticism of humans: we’re pretty fallible. No computer has passed the Turing test yet, but if one ever does, it will likely be the result of trickery rather than sheer intelligence.

The best example of this is ELIZA, a computer program built in 1966 by Joseph Weizenbaum. ELIZA is a simple chatbot with canned responses triggered by certain keywords. Most importantly, ELIZA is able to take parts of the user’s messages and insert them into its own responses. If you give ELIZA your name, for example, it can recognize that it’s your name and call you by it.

Perhaps the most important part of turning ELIZA into a convincing human, however, was that it was built to be a therapist — a Rogerian therapist, specifically. This meant it would prompt you to talk about your life, then use your responses to formulate more questions. You might say “I’m having trouble with my husband,” and ELIZA could respond, “Tell me more about your husband.”
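ELIZA’s reflection trick can be sketched in a few lines of Python. This is a heavily simplified illustration, not Weizenbaum’s original code; the patterns and word swaps below are invented for the example.

```python
import re

# A few invented Rogerian-style rules; Weizenbaum's script had many more.
RULES = [
    (re.compile(r"i'?m having trouble with (.+)", re.I),
     "Tell me more about {0}."),
    (re.compile(r"i am (.+)", re.I),
     "How long have you been {0}?"),
    (re.compile(r"my (\w+)", re.I),
     "Tell me more about your {0}."),
]

# Swap first-person words for second-person before echoing them back.
REFLECTIONS = {"my": "your", "i": "you", "me": "you", "am": "are"}

def reflect(fragment):
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(message):
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    # Canned fallback when nothing matches, much like a modern
    # assistant's "Sorry, I don't know that one."
    return "Please go on."

print(respond("I'm having trouble with my husband"))
# → Tell me more about your husband.
```

There’s no model of meaning anywhere in this code: just pattern matching, pronoun swapping, and a fallback. That’s the whole illusion.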

While this may sound simple, it did convince some people that a real human being was listening and responding to them. It still wouldn’t pass the Turing test, because those people were misled from the start: they were told that ELIZA was human. It’s also relevant that ELIZA appeared in the 1960s, when people were less likely to believe a computer was capable of this kind of communication. The important takeaway is that people spent hours talking to this program.

In the decades since ELIZA, many computer scientists have followed this program’s lead. The key to making a computer that can speak to humans isn’t to make one that fully imitates them, but one that is very good at a smaller set of tasks. The Amazon Alexa doesn’t have the full range of human linguistic ability; any time you deviate from something it knows, it will say something like “Sorry, I don’t know that one.” While a computer that can communicate exactly like a human would be an impressive feat, it’s not the only feat being pursued.

Does The Turing Test Still Have Value?

When Alan Turing first proposed the imitation game, the field of computer science looked very different than it does today. In the field’s early days, many computer scientists thought the ultimate goal of artificial intelligence was to create a thinking machine that could simulate human intelligence. But as we’ve seen, that goal ignores the vast number of other tasks a computer can do. A computer does not need to trick a human into thinking it’s human for it to be useful. There are billions of human intelligences walking around the planet; there’s really no need for one more.

The Turing test, then, is a product of its time, and its use is limited. If there comes a day when artificial intelligence is able to trick humans, that will mark a huge accomplishment in engineering. But already, computers are able to beat humans at chess, Jeopardy! and any number of other tasks. Machines could outsmart us long before they start speaking English.

The legacy of the Turing test, then, is mostly philosophical. What would it mean for a computer to imitate a human so seamlessly that we can’t tell the difference? As usual, scientists charge ahead, often ignoring the philosophical ramifications of what progress might bring. There’s an oft-quoted line in Jurassic Park in which a mathematician tells the creator of the live dinosaur theme park, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.” In the case of artificial intelligence, the warning inherent in this quote seems to go unheeded.

This isn’t to say that all work on artificial intelligence is necessarily dangerous — though famed physicist Stephen Hawking said just that — but that there are questions worth asking. The worst time to start wondering “Is it a good idea to create an intelligence that can perfectly mimic humans?” is after a machine has already passed the Turing test.
