February 17, 2007 8:02 AM PST

Driver's ed for robot racers

A correction was made to this story.
Reporter's Notebook

PALO ALTO, Calif.--Driving downtown in any city can be an exercise in frustration, thanks to heavy traffic, confusing signage, potholes and boneheaded drivers all crammed into a tight space.

But imagine how quickly that frustration could turn to fear if some of those drivers were robots running on software built to guide unmanned cars through city streets. Navigating such a complex environment is largely uncharted territory for artificial intelligence, let alone big steel.

"It's a little bit scary to think about (our robotic car among) other human drivers or other really large vehicles," Mike Montemerlo, senior research engineer at Stanford University's Artificial Intelligence Lab, said here Thursday.

Montemerlo is one of the lead scientists developing Stanford's newest robotic contestant for DARPA's Urban Challenge, a road race for artificial intelligence set for November. The contest is a follow-up to the defense agency's 2005 desert race, the Grand Challenge. Stanford's "Stanley," a robotic Volkswagen Touareg, won the $2 million Grand Challenge in a milestone for AI in the 21st century. Finishing fastest, in under seven hours, Stanley was one of only five vehicles to complete the 132-mile Nevada desert course in 2005; the previous year, every competitor failed outright.


Now, Stanford's AI team has built Stanley's successor, named "Junior" (after Stanford University founder Leland Stanford Jr.), a modified 2006 Volkswagen Passat wagon in a bright, German-manufactured blue. Montemerlo and team gave CNET News.com a preview of Junior and its technology at Stanford before the annual conference of the American Association for the Advancement of Science, where Stanford's Sebastian Thrun, AI director and head of the Stanford Racing Team, will give Junior its public unveiling.

Junior is still in development, but the robot is already far ahead of its parent technologically. (The Stanford Racing Team plans to begin its testing phase in March.) Junior has to be smarter if it is to meet the stiff challenge of navigating city streets alongside other vehicles, including other robotic contestants and human-driven cars fielded by DARPA.

In the desert race, Stanley only had to process the terrain in front of it, such as rocks or bumpy roads, because it wasn't driving among other robots. But in this race, Junior must be aware of fast-moving objects all around it, including its robotic rivals, and it must understand street signs, traffic lights and other basic rules of the road even when other robots are breaking those rules. As Thrun puts it, "The current challenge is to move from just sensing the environment to understanding the environment."

As a result, Junior must have much more sophisticated sensors that can "see" the world in a 360-degree view and process that data in as close to real time as possible. The Junior prototype, for example, has a new, high-definition lidar detection system by Velodyne, which spins around to give the robot an omnidirectional view of its surroundings. It also has a Point Grey Ladybug 2 video system, with six video cameras to capture near high-def video in all directions.
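As a rough illustration of what an omnidirectional view means in practice, a spinning lidar reports a bearing and a range for each laser return, and one full revolution covers 360 degrees around the car. The sketch below converts one such sweep into 2D points around the vehicle; the data format is an assumption made for illustration, not Velodyne's actual interface.

```python
import math

def sweep_to_points(readings):
    """Convert one lidar revolution into 2D points around the vehicle.

    readings: list of (bearing_deg, range_m) pairs covering 360 degrees,
    with the vehicle at the origin. Returns (x, y) points in meters.
    """
    points = []
    for bearing_deg, range_m in readings:
        theta = math.radians(bearing_deg)
        # Simple polar-to-Cartesian conversion for each laser return.
        points.append((range_m * math.cos(theta), range_m * math.sin(theta)))
    return points

# Four returns at the compass points, each from an obstacle 5 m away:
pts = sweep_to_points([(0, 5.0), (90, 5.0), (180, 5.0), (270, 5.0)])
```

A real system would do this per laser beam, at full rotation rate, and in three dimensions, but the principle of building an all-around picture from one revolution is the same.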

Unlike Stanley, which built its 3D model of the world over time, Junior will attempt to use its more sophisticated sensors to create that picture in real time. Speed of response is crucial in a city setting.

Junior's software also must include new decision-making and predictive abilities that Stanley didn't possess. For one, Junior will need to be able to identify objects and make decisions based on that information. For example, if Junior were to encounter a curb, it would need to swerve around it to avoid a collision. But it wouldn't want to swerve in order to pass another robot if it meant crossing a double line because that would be breaking the rules of the road.

For that reason, Junior has new software components that handle perception and decision making. One algorithm the AI lab has developed is for object tracking, which helps the robot recognize a bike, car, curb, road marking or other object, moving or stationary. The algorithm classifies objects--e.g., "that is a car moving 10 mph"--and runs the result through a planning tool that matches the data against the rules of the road in order to decide how to proceed.
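The classify-then-plan loop described above can be sketched in a few lines. Everything here is an illustrative assumption, not the Stanford team's actual code: the object categories, the speed threshold and the double-line rule are stand-ins for the kind of rules-of-the-road checks the article describes.

```python
from dataclasses import dataclass

@dataclass
class TrackedObject:
    kind: str         # classifier output, e.g. "car", "bike", "curb"
    speed_mph: float  # speed estimated by the object tracker
    ahead: bool       # whether the object sits ahead in the robot's lane

def plan_action(obj: TrackedObject, double_line: bool) -> str:
    """Match a classified object against simple rules of the road."""
    if obj.kind == "car" and obj.ahead:
        # Passing a slow car is only legal when no double line
        # separates the lanes -- the constraint the article mentions.
        if obj.speed_mph < 10 and not double_line:
            return "pass"
        return "follow"
    if obj.kind == "curb" and obj.ahead:
        return "swerve"
    return "proceed"

# A slow car ahead, but a double line forbids crossing to pass:
print(plan_action(TrackedObject("car", 8.0, True), double_line=True))   # follow
print(plan_action(TrackedObject("car", 8.0, True), double_line=False))  # pass
```

The point of the sketch is the separation of concerns: the tracker produces a labeled object, and a separate planner applies legal and safety rules to that label before choosing an action.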



Join the conversation!
You wanna know how to really reduce the number of fatalities on the roads?? Doesn't take a rocket scientist (or roboticist) to know the answer - seatbelts.
Posted by demner
Seatbelts, cellphones, and active communications
Most people do wear seatbelts. What's more, it only affects the safety of the person who chooses not to. Joe may die from not wearing a seatbelt, but he's not more likely to cause an accident. From my daily experience, drivers on cellphones are much more likely to cause an accident. I can't wait until they get robotic vehicles!

Also, the future of automated highways will probably have cars with systems like those described in the article, but I also see them communicating with the vehicles around them. Your car may be able to notice another car changing lanes and react, but it's better if the two cars communicate and cooperatively plan their movements. You'll always need the automated system to deal with cars that can't communicate and unexpected circumstances, but communication will greatly expand safety.
Posted by Philo
