Coming Soon: The Idiot-Proof, Thinking Car
This isn't science fiction we're talking about here. MIT’s Robotic Mobility Group, right here in Our Fair City of Cambridge, has developed and tested such a system. The group has worked with major automaker partners (including Ford), and is in talks with Daimler, BMW, Volvo, Toyota, Nissan and Mitsubishi.
Sterling Anderson, an MIT Ph.D. candidate at the lab, told me the system could be on cars in five years. Produced in volume, he said, this level of sophisticated crash avoidance (incorporating radar, cameras, gyros and accelerometers) would cost a new car buyer $5,000 to $10,000. For that reason, it’s likely to be on luxury cars at first.
The MIT system has been tested in real-world situations at the Ford test track in Dearborn, using a borrowed Jaguar S-Type. Anderson sat in the passenger seat with a laptop while a Ford test driver steered around cones and other obstacles. “In more than 800 trials, we experienced zero collisions or losses of control,” Anderson said. An unmanned version of the system, developed with Quantum Signal (which provided the platform for teleoperated control), has been tested on a heavily modified Kawasaki mule. Here’s what the tests look like, on video:
So-called semi-autonomous safety systems aren’t totally new. I visited Volvo’s test track in Sweden a decade or more ago, and they demonstrated a system that braked automatically when it sensed road obstacles ahead. I drove straight at an inflated car-shaped balloon, and the Volvo took over at the last minute, hitting the brakes sharply. I still hit the balloon, but not that hard.
Many cars today have adaptive cruise control, which will adjust your speed in relation to the car ahead of you. That, too, was initially only in luxury cars (I first tested it in a Jaguar), but soon reached a more mainstream audience. I note that Mazda has developed something called Smart City Brake Support (SCBS), and Volvo now has City Safety. Mazda’s SCBS uses lasers to detect objects or stopped vehicles ahead, and—assuming the car isn’t traveling too fast—brings it to a complete stop without the driver taking any action. Both automakers’ systems are intended for low-speed, stop-and-go driving.
Anderson assured me that MIT’s system is much more sophisticated than what he called its “one-dimensional” precedents. “There are a number of lower-level, myopic driver assistance systems developed in recent years, some of which will alert you when they detect an obstacle or stray from your lane,” he told me. “Our system addresses the broader issue of keeping you safe, incorporating all those earlier advances. It will analyze the nature of the problem, and if it can find a safe path to steer around the obstacle, it will take that action; if not, it will apply the brakes.”
It’s a thinking system, in other words, that “incorporates the idea of a threat.” I like that. It’s tactical. It’s like an always-alert guard who doesn’t require coffee breaks. It’s actively monitoring the environment and adjusting the car’s response, sometimes minutely, to ensure that the driver stays within the collision-free safety region. So it might simultaneously be noting a child on the sidewalk, the escape hatch of an empty lane next to yours, and the icy conditions that could cause the car to slip. If you go into a corner too fast, it’s on the job there, too.
The system’s prime directives are, in order: 1) avoid collisions; and 2) honor the driver’s intentions. But, remember, the driver could well be an idiot! That’s why we have so many stupid accidents. And even smart people are notably distracted by texting and cellphone calls. Note the priorities—the driver’s intentions are secondary.
But people are pretty good at threat assessment. For instance, we can probably tell, in a microsecond, whether that’s an empty box in the road, not presenting much of a threat, or one full of concrete blocks—representing imminent danger. “You need significant image processing to be able to tell the difference between those two things, and we haven’t bitten that off yet,” Anderson told me. Instead, the system will see the box, assume it’s dangerous and plan a safe route around it—or hit those brakes.
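If you strung Anderson’s description together, the decision loop would look something like the Python sketch below. To be clear, this is my own illustrative simplification—the function, its inputs and the action names are hypothetical, not MIT’s actual implementation:

```python
from typing import Optional

def decide(driver_path_clear: bool, safe_detour: Optional[str]) -> str:
    """Hypothetical sketch of the decision logic Anderson describes.

    Priority 1: avoid collisions. Priority 2: honor the driver's
    intentions. Names and structure are illustrative only.
    """
    if driver_path_clear:
        return "follow_driver"   # the driver's intent wins when it's safe
    if safe_detour is not None:
        return safe_detour       # a safe path around the obstacle exists
    return "brake"               # no safe path found: apply the brakes

# Because the system can't yet tell an empty box from one full of
# concrete, any detected object on the driver's path is treated as
# dangerous, triggering either a detour or braking.
print(decide(True, None))             # -> follow_driver
print(decide(False, "steer_around"))  # -> steer_around
print(decide(False, None))            # -> brake
```

The key design point is the ordering of the checks: the driver’s intentions are consulted first only when the path ahead is safe, which is exactly the “collisions first, driver second” priority Anderson lays out.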
Anderson says the team is working on some kind of visual or physical signal that will alert the driver—maybe by vibrating the steering wheel a bit—when the system has taken an action. That’s useful, because otherwise drivers may get an inflated sense of their own competence. You need to know when you’ve made a mistake.
Let’s face it, machines do a lot of stuff for us. The eminent author E.M. Forster wrote “The Machine Stops,” a prescient short story, in 1909. It posits a world in which we humans do practically nothing for ourselves, thanks to the manifold wonders of “The Machine.” When it stops, people don’t know what to do. Sounds a lot like the Internet and TV, doesn’t it? Read the story here, or if that’s too much work, here’s a 1966 British television adaptation of the story: