Earlier this year, a chatbot called Eugene Goostman “beat” a Turing Test for artificial intelligence as part of a contest organized in the U.K. The problem with the Turing Test is that it’s not really a test of whether an artificial intelligence program is capable of thinking: it’s a test of whether an AI program can fool a human. Almost immediately, it became obvious that rather than proving that a piece of software had achieved human-level intelligence, all this particular competition had shown was that a piece of software had gotten fairly adept at fooling humans into thinking they were talking to another human, which is very different from a measure of the ability to “think.” (In fact, some observers didn’t think the bot was very clever at all.) Clearly, a better test is needed, and we may have one, in the form of a type of question called a Winograd schema that’s easy for a human to answer but a serious challenge for a computer.
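The shape of a Winograd schema is easiest to see with Terry Winograd's original example, sketched below. The tiny lookup table is purely illustrative: it merely encodes the correct answers, whereas an actual solver would need the commonsense reasoning to derive them.

```python
# Winograd's original schema: changing one word flips the referent of "they".
#   "The city councilmen refused the demonstrators a permit
#    because they {feared|advocated} violence."
# No grammatical cue distinguishes the two readings; world knowledge does.

ANSWERS = {
    "feared": "the city councilmen",   # those refusing the permit fear violence
    "advocated": "the demonstrators",  # those seeking the permit advocate it
}

def resolve(special_word: str) -> str:
    """Return the intended referent of 'they' for the given verb.
    This table only records the answers; it does not reason its way to them."""
    return ANSWERS[special_word]

for verb in ("feared", "advocated"):
    print(f"they {verb} violence -> they = {resolve(verb)}")
```

Because each schema hinges on a single word swap, a program can't pass by pattern-matching on surface text the way a Turing Test chatbot can.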