Artificial Intelligence and What Computers Still Don’t Understand

Hector Levesque thinks his computer is stupid—and that yours is, too. Siri and Google’s voice searches may be able to understand canned sentences like “What movies are showing near me at seven o’clock?,” but what about questions—“Can an alligator run the hundred-metre hurdles?”—that nobody has heard before? Any ordinary adult can figure that one out. (No. Alligators can’t hurdle.) But if you type the question into Google, you get information about Florida Gators track and field. Other search engines, like Wolfram Alpha, can’t answer the question, either. Watson, the computer system that won “Jeopardy!,” likely wouldn’t do much better.

In a terrific paper just presented at the premier international conference on artificial intelligence, Levesque, a University of Toronto computer scientist who studies these questions, has taken just about everyone in the field of A.I. to task. He argues that his colleagues have forgotten about the “intelligence” part of artificial intelligence.

Levesque starts with a critique of Alan Turing’s famous “Turing test,” in which a human, through a question-and-answer session, tries to distinguish machines from people. You’d think that if a machine could pass the test, we could safely conclude that the machine was intelligent. But Levesque argues that the Turing test is almost meaningless, because it is far too easy to game. Every year, a number of machines actually compete in such a challenge, vying for the Loebner Prize. But the winners aren’t genuinely intelligent; instead, they tend to be more like parlor tricks, and they’re almost inherently deceitful. If a person asks a machine “How tall are you?” and the machine wants to pass for human, it has no choice but to confabulate. It has turned out, in fact, that the winners tend to rely on bluster and misdirection far more than anything approximating true intelligence. One program worked by pretending to be paranoid; others have done well by tossing off one-liners that distract interlocutors. The fakery involved in most efforts at beating the Turing test is emblematic: the real mission of A.I. ought to be building intelligence, not building software specifically tuned to passing some arbitrary test.

To try to get the field back on track, Levesque is encouraging artificial-intelligence researchers to consider a different test that is much harder to game, building on work he did with Leora Morgenstern and Ernest Davis (a collaborator of mine). Together, they have created a set of challenges called Winograd schemas, named for Terry Winograd, a pioneering artificial-intelligence researcher at Stanford. In the early nineteen-seventies, Winograd asked what it would take to build a machine that could answer a question like this:

The town councillors refused to give the angry demonstrators a permit because they feared violence. Who feared violence?

a) The town councillors

b) The angry demonstrators

More here: Artificial Intelligence and What Computers Still Don’t Understand
