When our robot overlords launch the nukes and seize control of the planet, chances are they'll rise up first in the United States. And they definitely won't deliver evil speeches when they're running the place.

Those are the conclusions a technophobe might draw from a new study identifying a possible flaw in the Turing test, a long-standing benchmark for evaluating artificial intelligence. The study found that a machine can successfully masquerade as a thinking entity during a blind conversation if it is allowed to remain silent whenever it chooses (i.e. to plead the Fifth Amendment, in U.S. legal terms).

The Turing test, or imitation game, was devised by the famed mathematician Alan Turing to evaluate whether a machine can present itself as a thinking entity by simulating the quirks, imperfections and thought processes that people demonstrate in conversation. The test involves a human interrogating two hidden entities: one human, and one machine with programmed responses. A machine passes the test if the interrogator fails to correctly identify it in at least 30 per cent of five-minute conversations.

However, Coventry University researchers Kevin Warwick and Huma Shah say the test has a loophole that allows machines to technically "pass" by refusing to respond to their interrogator. In many cases, the human interrogator is left unsure whether they are talking to a human or a machine. By Turing's definition, the interrogator needs to make a "right identification" more than 70 per cent of the time for a machine to fail the test. But if the machine never answers, the judge has no evidence either way and is reduced to a 50-50 guess about who or what they are talking to.
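The arithmetic behind the loophole can be sketched with a small simulation. This is an illustrative model, not the authors' methodology: it assumes a judge facing total silence simply flips a fair coin between "human" and "machine", and checks the result against the 30 per cent pass threshold.

```python
import random

random.seed(0)

PASS_THRESHOLD = 0.30  # machine passes if misidentified in >= 30% of sessions
N_SESSIONS = 10_000    # hypothetical number of five-minute test sessions

# A silent machine gives the judge nothing to go on, so we model each
# verdict as a fair coin flip; "human" counts as a misidentification.
misidentifications = sum(
    1 for _ in range(N_SESSIONS) if random.random() < 0.5
)

rate = misidentifications / N_SESSIONS
print(f"misidentification rate: {rate:.1%}")   # hovers around 50%
print("machine passes:", rate >= PASS_THRESHOLD)
```

Under this assumption the silent machine is misidentified roughly half the time, comfortably above the 30 per cent bar, which is exactly the weakness the paper describes.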

Warwick and Shah say this approach is a clear weakness in the test, in that it doesn't require the machine to condemn itself.

"Why should a truly intelligent machine ingratiate itself with humanlike responses just to be considered human?" Warwick and Shah say in their paper. "Is not the truly Turing-intelligent machine the one that knows when and why to be silent?"

Warwick and Shah based their latest paper on several previous studies, including one of their own, which analyzed transcripts from a Turing test involving several machines at the Royal Society in London. One of the machines, nicknamed "Cleverbot," answered every one of the interrogator's questions with a blank message. At the end of the five-minute conversation, the interrogator said they were unsure whether they had spoken to a machine or a human.

"The overall possibility of passing the Turing test (and being regarded as being a thinking entity) by simply not responding at all (taking the 5th) is seen as a major loophole," the study authors say.

"Cleverbot" did not pass the overall test, although it did fool a few judges. Another machine in the test, nicknamed "Eugene Goostman," achieved a 33 per cent success rate by frequently answering questions with questions of its own.

A transcript from a "Cleverbot" conversation is shown below. After the conversation, the human judge said they were unsure about whether they were speaking to a human or a machine.

  • [10:58:08] Judge: good day
  • [10:58:08] Entity:
  • [10:58:46] Judge: is no response an answer
  • [10:58:46] Entity:
  • [10:59:35] Judge: am i not speaking you’re language
  • [10:59:35] Entity:
  • [11:00:25] Judge: silence is golden
  • [11:00:25] Entity:
  • [11:01:32] Judge: shhh
  • [11:01:32] Entity:
  • [11:03:07] Judge: you make great conversation
  • [11:03:07] Entity:

Another Turing test, conducted in 2008, recorded a conversation between a human interrogator and a machine that used messages and silence in a very human way. In the following transcript, the judge said they were unsure about the nature of the entity.

  • [12:34:03] Judge: wotcha, how’s it going?
  • [12:34:31] Judge: hello?
  • [12:34:36] Entity:
  • [12:34:40] Entity: Don’t mind me – I’m just messing with your head :)
  • [12:35:00] Judge: grr
  • [12:35:31] Judge: how’s your day been so far?
  • [12:35:35] Judge: finding this interesting?
  • [12:36:13] Judge: you still therE?
  • [12:38:57] Entity:
  • [12:39:22] Entity: So far but no further? What of the future, do you think?

The study authors suggest a "patch" or "fix" may be necessary to amend the Turing test, although they do not offer a concrete fix of their own.

Warwick and Shah's findings are published online in the Journal of Experimental & Theoretical Artificial Intelligence.