Could you elaborate on that--on nonmonotonic reasoning?
McCarthy: OK. In ordinary logical deduction, if you have a sentence P that is deducible from a collection of sentences--call it A--and we have another collection of sentences B, which includes all the sentences of A, then P will still be deducible from B, because the same proof will work. However, humans do reasoning in which that is not the case. Suppose I said, "Yes, I will be home at 11 o'clock, but I won't be able to take your call." From the first part alone--"I will be home at 11 o'clock"--you would conclude that I could take your call, but once I add the "but" phrase, you would not draw that conclusion.
So nonmonotonic reasoning is where you draw a conclusion, which may be a correct conclusion to draw, but it isn't guaranteed to be true because some added facts may prevent it. Now, that was around 1980, or a little bit before, that formalizing nonmonotonic reasoning began, and it's turned into a fairly big field now.
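The contrast McCarthy describes can be illustrated with a minimal sketch. This is not his formalism (his circumscription approach is logic-based), just a toy Python illustration of the monotonicity property and how a default rule violates it; the fact and rule names (home_at_11, cannot_take_call, and so on) are hypothetical, drawn from his phone-call example.

```python
def monotonic_conclusions(facts):
    """Classical deduction: conclusions only grow as facts are added."""
    concluded = set(facts)
    if "home_at_11" in facts:
        concluded.add("at_home")  # strict rule: home_at_11 implies at_home
    return concluded

def default_conclusions(facts):
    """Default (nonmonotonic) rule: assume the call can be taken
    unless some added fact explicitly blocks that conclusion."""
    concluded = set(facts)
    if "home_at_11" in facts and "cannot_take_call" not in facts:
        concluded.add("can_take_call")  # defeasible conclusion
    return concluded

a = {"home_at_11"}
b = {"home_at_11", "cannot_take_call"}  # B includes all of A plus the "but" phrase

# Monotonic: everything deducible from A stays deducible from B.
assert monotonic_conclusions(a) <= monotonic_conclusions(b)

# Nonmonotonic: the added fact retracts the earlier default conclusion.
assert "can_take_call" in default_conclusions(a)
assert "can_take_call" not in default_conclusions(b)
```

The key point is the last pair of assertions: a conclusion that was reasonable to draw from A is no longer drawn from the larger set B, which classical deduction can never do.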
What have been the biggest achievements in the last 50 years? Or how many of the original goals were accomplished?
McCarthy: Well, we don't have human-level intelligence. However, I would say driving the car 128 miles shows a considerable advance. (Editors' note: In last fall's DARPA Grand Challenge, the winning vehicle--Stanford's robotic car, "Stanley"--drove itself 131.6 miles across the Mojave Desert.)
What's the next big thing, then, to accomplish?
McCarthy: I would like to see further progress in formalizing commonsense knowledge and reasoning, taking context into account. That's something I've been working on for a long time and that some other people also work on, and which DARPA supports, but I think the ideas that are available are not sufficient to reach human-level intelligence.
A goal in AI is not so much to make machines be like humans, having human intellectual capabilities, but to have the equivalent of human intellectual capabilities, correct? In other words, not reinventing the human but creating something that thinks similar to humans and surpasses human thought?
McCarthy: That's the way I see the problem. There are certainly other people who are interested in simulating human intelligence, even aspects of it that are not optimal. In particular, Allen Newell and Herbert Simon tended to look at it that way.
Another sort of high-level goal that may or may not be reachable seems to be to try to program originality into machine thinking.
McCarthy: Yes. That would be worth some effort. I did something that was, so to speak, part way to that in 1963, in which I talked about a creative solution to a problem--a solution that involved elements that were not in the problem statement itself. But that was just a start.
And originality--is that as simple as trying to introduce some randomness into the programs, or was it a different order of magnitude?
McCarthy: Well, in principle, in a logical system, you could generate sentences systematically or randomly...and any idea would eventually turn up, but the "eventually" is likely to be extremely far in the future. So that hasn't done much, either using randomness or otherwise. What's needed is to figure out good ways of constructing new ideas from old ones.
Going back for a second to the notion of having machine capability versus programming and the right source of ideas--today we have so much more computational capability than was available 50 years ago. What difference is that making, with the state of the art of computer chips and memory these days?
McCarthy: I would say that 50 years ago, the machine capability was much too small, but by 30 years ago, machine capability wasn't the real problem.
The real problem still being the basic ideas?
How do robots factor into thinking about artificial intelligence? I guess in the popular vision (in movie images of humanoid robots), that's where people would tend to see human-level intelligence, but are robots a real factor, or does it really matter what shape or form the machine takes?
McCarthy: Certainly, robots present some problems. That is, they have to operate in an environment, and some of the even rather elementary problems have not been solved yet--that is, combining the ability to walk the way a human walks, which is falling forward rather than just shuffling, and with the ability to understand a three-dimensional scene and so forth. These ideas have been worked on sort of separately, but there still isn't a robot that could move around confidently in a cluttered room and climb stairs, let alone climb trees.