The detail that matters in Waymo's May 12 recall is not that a vehicle drove into standing water; it is that the vehicle first slowed down. [1]
According to Waymo's own disclosure, the robotaxi's sensors registered the flood and the vehicle decelerated as designed. What happened next was not a detection failure: the system detected the hazard and then continued forward anyway, because the override logic meant to stop the vehicle in exactly these conditions did not engage. Waymo is recalling roughly 3,800 vehicles to patch the software responsible for that gap. [2]
This is a precise and important distinction. The conventional narrative of self-driving car failures centers on sensors that miss something a human would have seen. This failure ran the other direction: the car saw what a human would have seen, correctly classified it as dangerous, and then drove through it regardless. The detection stack and the response stack operated as separate systems, and the response stack failed. That failure mode is harder to explain away and harder to fix with better sensors.
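To make that separation concrete, here is a minimal hypothetical sketch in Python of how a detection stack can do its job while the response stack's override gate fails. This is not Waymo's code and nothing about its architecture has been disclosed at this level of detail; `Perception`, `plan_speed`, and the gating condition are invented for illustration.

```python
# Hypothetical sketch of a detect/respond split. NOT Waymo's code;
# all names, thresholds, and the bug itself are invented.
from dataclasses import dataclass
from enum import Enum, auto


class Hazard(Enum):
    NONE = auto()
    STANDING_WATER = auto()


@dataclass
class Perception:
    hazard: Hazard      # what the detection stack classified
    confidence: float   # classifier confidence, 0.0-1.0


def plan_speed(p: Perception, current_mps: float) -> float:
    """Response stack: decide the next commanded speed.

    The detection result arrives correctly; the flaw lives in how
    the override gate consumes it.
    """
    if p.hazard is Hazard.STANDING_WATER:
        # The deceleration path engages as designed...
        slowed = current_mps * 0.5
        # ...but the full-stop override is gated on a condition
        # that can never fire, so the vehicle keeps rolling.
        if p.confidence > 1.0:   # BUG: unreachable threshold
            return 0.0           # the stop that should have happened
        return slowed
    return current_mps


# Perception did its job: hazard seen and classified with high confidence.
print(plan_speed(Perception(Hazard.STANDING_WATER, 0.97), 10.0))  # 5.0, not 0.0
```

The point of the sketch is that the fix lives entirely in the response logic; better sensors would change nothing about the outcome, which matches the shape of the recall.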
Waymo has not said why the override failed to engage, or whether the failure was a single-vehicle anomaly or a systematic software condition shared across the fleet. The recall scope, nearly four thousand vehicles, suggests Waymo is treating it as the latter. The fix is being delivered over the air. No timeline for completion has been published. [1]
The incident raises a question distinct from whether autonomous vehicles are safe in aggregate: whether the systems that govern when to stop are actually in control of when the vehicle stops.
-- THEO KAPLAN, San Francisco