Forbes magazine reports that Waymo, the Google self-driving car spin-off, will buy a fleet of vehicles to equip with its suite of sensors, computers, and controls to produce self-driving cars. In this blogger’s mind, that article, by describing the sensor package in detail, buried the lede: instances where Waymo’s human vehicle ‘drivers’ had to take command of their self-driving cars plummeted by a factor of seven in a little over a year.
Disengagement frequency is a key measure of self-driving car success, tracked by the state of California. A disengagement occurs when a self-driving car cedes control to its human shepherd; self-driving car manufacturers (with vehicles on the road in California) report the total mileage operated and the number of disengagements. From these two numbers, division gives the mean miles between disengagements, where higher is better.
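The arithmetic is simple enough to sketch in a few lines of Python. The mileage and disengagement counts below are made up, chosen only so the ratios match the per-manufacturer averages quoted later in this post; the actual totals live in each manufacturer’s DMV filing:

```python
# Illustrative figures only -- NOT the real DMV-reported totals.
reports = {
    "Waymo":  {"miles": 531_800, "disengagements": 100},
    "Delphi": {"miles": 4_200,   "disengagements": 100},
    "Nissan": {"miles": 1_400,   "disengagements": 100},
}

def miles_per_disengagement(miles, disengagements):
    """Mean miles between disengagements: higher is better."""
    return miles / disengagements

for maker, r in reports.items():
    mpd = miles_per_disengagement(r["miles"], r["disengagements"])
    print(f"{maker}: {mpd:,.0f} miles per disengagement")
```

Note that the metric collapses everything into a single ratio: a manufacturer that drove few miles cautiously and one that drove many miles with frequent handoffs can land at the same number.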
Here’s where things get fuzzy: disengagements are a matter of interpretation. Google discloses only driver-initiated disengagements it believes to be warranted (based on its interpretation of California’s rules).
Waymo in the Lead
Waymo vehicle reliability, as reported by Forbes, blows the other manufacturers out of the water.
Between September 2014 and December 2015, Waymo cars traveled, on average, 5,318 miles between disengagements, while Delphi cars averaged 42 miles and Nissan only 14. Waymo’s reliability over that period improved from the prior period by a factor of almost seven.
At a high level, disengagement frequency doesn’t differentiate by cause or severity. For example, a human-operator override triggered because the self-driving car was being overly cautious is lumped in with an override triggered because the car failed to react to a hazard. California DMV data shows that the most common disengagement incurred by Waymo vehicles, occurring 35% of the time, was due to a “perception discrepancy,” defined as “a situation in which the [self-driving car]’s sensors are not correctly perceiving an object (e.g., perceiving overhanging branches as an obstacle)”.
On the Learning Curve
I have long wondered how a self-driving car will respond to stop signs. Specifically, when arriving at a fixed stop sign, the driver is free to proceed once she has verified that she has the right of way and it is safe to enter the intersection. When she approaches a stop sign held by a flagger, however, she must stop for as long as the sign is displayed, regardless of whether there are oncoming cars. A self-driving car must both perceive the stop sign and understand the context in which it is displayed.
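The distinction boils down to two different decision rules keyed off context. A toy Python sketch (the names here are mine for illustration, not anything from a real autonomy stack, and a real system would also have to handle stopping first, yielding order, and much more):

```python
from enum import Enum, auto

class StopSignContext(Enum):
    FIXED = auto()    # permanent sign posted at an intersection
    FLAGGER = auto()  # hand-held sign at a work zone

def may_proceed(context, sign_displayed, has_right_of_way, intersection_clear):
    """Toy decision rule for the two stop-sign contexts described above.

    Assumes the vehicle has already come to a complete stop.
    """
    if context is StopSignContext.FLAGGER:
        # Must wait as long as the flagger displays the sign,
        # regardless of oncoming traffic.
        return not sign_displayed
    # Fixed sign: proceed once we have the right of way and the
    # intersection is safe to enter.
    return has_right_of_way and intersection_clear
```

The point of the sketch is that identical perception inputs (a red octagon reading STOP) demand opposite behavior depending on context, which is exactly the kind of judgment that is easy for a human and hard to learn.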
We can infer a steep learning curve for self-driving cars and their developers. The relatively poor disengagement performance of Delphi and Nissan isn’t necessarily a death knell. To the extent that they can learn from their disengagements, they may be able to recover.
How do Waymo vehicles compare to human drivers? This is a tough one. Keep in mind that disengagements include scenarios that could lead to a collision. People do not necessarily notice their near misses (or, if they do, they may deny them), whereas Waymo vehicles and their drivers record them. It’s natural, and potentially dangerous, to overestimate one’s driving skills; when asked who in a room is an above-average driver, most hands go up. Human drivers do not have bona fide co-drivers (backseat drivers, maybe) who could take over in an unsafe situation the way Waymo cars do. All told, we could piece together collision and traffic-violation data to estimate human driver capability, but that estimate would no doubt be an overstatement.