A School District Tried to Help Train Waymo's Robots to Stop for School Buses. It Didn't Work.

Austin ISD offered real-world training data. Three software fixes later, the robotaxis still can't reliably stop when children are boarding.

Waymo’s autonomous vehicles have repeatedly failed a fundamental safety test: stopping for school buses when children are boarding. Despite months of collaboration with Austin’s school district, successive software updates, and a federal recall, the problem persists.

The Austin Independent School District has documented at least 24 incidents where Waymo robotaxis illegally passed stopped school buses with flashing lights and extended stop arms. That’s not an edge case. That’s a pattern.

The Training That Didn’t Take

In mid-December, Austin ISD took an unusual step. The district hosted a half-day data collection event, staging school buses from across its fleet with stop arms extended and lights flashing so Waymo could expose its vehicles to the scenario in a controlled environment.

It was a direct response to the growing violation count. The district essentially offered free training data to help the AI system learn something that human drivers figure out on day one of driver’s ed.

By mid-January, more than a month after the training session, at least four more violations had occurred. Neither the software update Waymo issued in December nor the NHTSA-mandated recall of December 10 had fixed the problem.

The Human Error That Wasn’t Human

One of the most revealing incidents happened on January 12, 2026. A Waymo vehicle stopped correctly for a school bus displaying stop signals. Then it called home.

The robotaxi sent a query to a human remote assistance operator in Novi, Michigan: “Is this a school bus with active signals?”

The operator responded “No.”

The vehicle proceeded to pass the bus while students were still boarding.

Waymo’s system worked exactly as designed. It asked for human help when uncertain. The human gave the wrong answer. The children remained in danger.

This single incident encapsulates the problem with treating humans as the failsafe for AI systems. The whole point of remote assistance is to handle situations the AI can’t resolve on its own. When the human backup fails at the very task it exists to backstop - and a wrong answer can override a correct machine decision - the safety net has holes.
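
There is a standard safety-engineering answer to this failure mode: make the arbitration between machine and human judgment fail conservative, so a lone human “no” cannot override a machine “yes” on a safety-critical condition. The Python sketch below is a minimal, hypothetical illustration - the names and the query flow are assumptions for this article, not Waymo’s actual remote-assistance API.

```python
from enum import Enum, auto

class Action(Enum):
    REMAIN_STOPPED = auto()
    PROCEED = auto()

def arbitrate_school_bus(onboard_detects_signals: bool,
                         operator_confirms_signals: bool | None) -> Action:
    """Fail-conservative arbitration for a stopped school bus (hypothetical).

    If EITHER the onboard perception stack or the remote operator
    believes the bus is displaying active stop signals, the vehicle
    stays stopped. A human "no" can only break the tie when the
    machine itself sees nothing.
    """
    if onboard_detects_signals:
        return Action.REMAIN_STOPPED   # the machine's own detection wins
    if operator_confirms_signals:      # None (no answer) falls through safely
        return Action.REMAIN_STOPPED
    return Action.PROCEED

# The January 12 incident as reported: the vehicle stopped on its own,
# then proceeded after an operator answered "No." Under OR-of-alarms
# arbitration, the wrong human answer could not have unstopped it.
assert arbitrate_school_bus(True, False) is Action.REMAIN_STOPPED
```

The trade-off is availability: an OR-of-alarms policy produces more unnecessary stops. But an unnecessary stop next to a school bus is the cheap kind of error.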

Why Simple Rules Are Hard

Stopping for school buses isn’t complicated. Flashing red lights, extended stop arm, you stop. Every state has this law. Every human driver learns it.

But Phil Koopman, a Carnegie Mellon professor who studies autonomous vehicle safety, points out the real issue: this represents a fundamental failure of Waymo’s safety validation process.

Three successive versions of the software - the original code, a first update, and the NHTSA recall remedy - have all failed at the same task. If the company can’t reliably solve “stop for the yellow bus with flashing lights,” what else is being missed?

Koopman notes that Waymo recently shifted toward end-to-end machine learning for its driving software. The millions of miles the previous system logged may not predict how the current version behaves. New architecture means new failure modes.
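
One remedy safety researchers, Koopman among them, have long advocated is a deterministic runtime monitor: a small piece of conventional code that sits outside the learned planner and can only veto it. The rule at issue is simple enough to state directly. What follows is a hypothetical sketch that assumes perception already emits bus detections - which is itself the hard part - and is not a description of Waymo’s architecture.

```python
from dataclasses import dataclass

@dataclass
class BusObservation:
    is_school_bus: bool
    red_lights_flashing: bool
    stop_arm_extended: bool

def monitor_allows_motion(nearby: list[BusObservation]) -> bool:
    """Deterministic veto layer over a learned planner (hypothetical).

    The learned stack proposes motion; this gate can only say "no."
    It encodes exactly one rule: a school bus with flashing red lights
    or an extended stop arm means the vehicle must not pass, no matter
    what the planner wants to do.
    """
    for bus in nearby:
        if bus.is_school_bus and (bus.red_lights_flashing or bus.stop_arm_extended):
            return False   # veto: remain stopped
    return True

# Example: a detected school bus with an extended stop arm vetoes motion.
bus = BusObservation(is_school_bus=True, red_lights_flashing=True,
                     stop_arm_extended=True)
assert monitor_allows_motion([bus]) is False
```

The value of a monitor like this is independence: retraining an end-to-end model can silently shift its behavior, but it cannot delete a rule that lives outside the model. Its weakness is the same one the incident record suggests - the gate is only as good as the detections feeding it.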

The District’s Request, Denied

After the violations continued, Austin ISD made a reasonable request: cease robotaxi operations near schools during bus hours until the system reliably stops for buses.

Waymo declined.

The company maintains that its vehicles are safer than human drivers overall, citing statistics about pedestrian injuries. But those aggregate numbers don’t help when a specific, predictable scenario keeps failing.

“We take safety seriously,” goes the standard corporate response. But actions matter more than statements. Continuing operations in school zones while your vehicles blow through stopped school buses tells parents and districts what the actual priority is.

Three Federal Investigations

The federal government has acted against Waymo three separate times within a three-month period:

  • NHTSA issued the December 10 recall
  • The agency opened a formal probe after continued incident reports
  • The National Transportation Safety Board is investigating the Austin violations specifically

NTSB’s preliminary findings were blunt: Waymo’s vehicles acted “illegally” around school buses. The safety board is examining more than 20 incidents in Austin alone.

A final NTSB report is expected in 12 to 24 months. Meanwhile, the robotaxis keep operating.

What This Means for Autonomous Vehicles

The school bus problem reveals a fundamental limitation of machine learning systems. Waymo has logged billions of miles. Its vehicles perform impressively in many scenarios. But “impressive in aggregate” and “reliable in specific safety-critical situations” are different standards.

The Austin school district tried to help, providing exactly what AI systems supposedly need - real-world training data for the failing scenario. It wasn’t enough.

This suggests that simply logging more miles isn’t the solution. The autonomous vehicle industry may need fundamentally new approaches to ensure AI systems can reliably handle safety-critical situations they haven’t explicitly been programmed for.

Or at minimum, they need to stop operating in scenarios where their systems demonstrably fail.

What You Can Do

If you live in an area with robotaxi operations:

  • Assume autonomous vehicles may not handle school buses, crossing guards, or other non-standard situations correctly
  • Report incidents to local authorities and the company
  • Contact your school district to ask what policies they have regarding autonomous vehicles near schools
  • Support local regulations requiring autonomous vehicles to demonstrate competency in safety-critical scenarios before deployment

The technology isn’t ready for every situation. The companies deploying it need to acknowledge that, or regulators need to enforce it.