NASA's Perseverance Rover Just Completed the First AI-Planned Drive on Mars

JPL used Anthropic's Claude to plot a 456-meter route across Jezero Crater. It wrote the commands in Rover Markup Language, identified hazards with 98.4% accuracy, and cut planning time in half.

On December 8, 2025, NASA’s Perseverance rover began driving across Jezero Crater on Mars along a route no human had plotted. The waypoints in its memory had been generated by Anthropic’s Claude AI - the first time a large language model has planned a drive on another planet.

Over two sols (Martian days 1,707 and 1,709), Perseverance drove 689 feet (210 meters) on December 8 and 807 feet (246 meters) on December 10 - roughly 456 meters total through rocky terrain. JPL engineers estimate the AI approach cuts route-planning time in half.

How Rover Driving Normally Works

Driving on Mars is nothing like driving on Earth. Radio signals take up to roughly 20 minutes to travel one way between Earth and Mars, depending on the planets’ positions - which rules out anything resembling a joystick. Instead, JPL’s rover planners manually build “breadcrumb trails” - sequences of waypoints spaced no more than 330 feet (about 100 meters) apart, where the rover stops and waits for the next set of instructions.
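The breadcrumb constraint is easy to picture as data: a drive is an ordered list of waypoints, and no leg between consecutive waypoints may exceed the spacing limit. A minimal sketch of that check - the coordinates and the exact limit here are illustrative, and real rover plans carry far more state than x/y positions:

```python
import math

MAX_LEG_M = 100.0  # the ~330 ft spacing limit described above, in meters


def leg_lengths(waypoints):
    """Straight-line length of each leg in a waypoint trail (x, y in meters)."""
    return [math.dist(a, b) for a, b in zip(waypoints, waypoints[1:])]


def validate_trail(waypoints, max_leg=MAX_LEG_M):
    """Return indices of legs that exceed the spacing limit."""
    return [i for i, d in enumerate(leg_lengths(waypoints)) if d > max_leg]


# A made-up four-waypoint trail: the final leg is ~170 m, over the limit.
trail = [(0.0, 0.0), (60.0, 30.0), (120.0, 80.0), (130.0, 250.0)]
print(validate_trail(trail))  # [2] -> the third leg is too long
```

Any leg that fails the check would be split by inserting intermediate waypoints, which is essentially what the planners do by hand.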

The planners work from two data sources: high-resolution orbital imagery from the HiRISE camera aboard Mars Reconnaissance Orbiter, and terrain-slope data from digital elevation models. They identify hazards - boulder fields, sand ripples, steep slopes - and plot a safe path through them. It’s painstaking, technically demanding work, and it’s one of the biggest bottlenecks in rover operations.

The consequences of getting it wrong are real. In 2009, NASA’s Spirit rover drove into soft sand and got stuck. Despite months of rescue attempts, it never drove again.

What Claude Actually Did

JPL engineers handed the waypoint-planning task to Claude Code - Anthropic’s agentic coding tool. But they didn’t throw the AI at the problem cold: they fed it years of operational data accumulated since Perseverance’s February 2021 landing.

Given that context, Claude analyzed HiRISE orbital imagery using its vision capabilities to identify terrain features: bedrock, outcrops, boulder fields, sand ripples. It then built the route in 10-meter segments, stringing them into a continuous path. The model iterated on its own work - critiquing waypoint placement and suggesting revisions before finalizing the plan.
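The geometry of stringing 10-meter segments into a continuous path can be sketched as simple interpolation between endpoints. This illustrates the segment structure only, not JPL’s actual planning code - the coordinates are invented:

```python
import math


def segment_route(start, end, seg_len=10.0):
    """Break a straight leg into waypoints at most seg_len meters apart."""
    dist = math.dist(start, end)
    n = max(1, math.ceil(dist / seg_len))  # number of equal segments needed
    return [
        (start[0] + (end[0] - start[0]) * i / n,
         start[1] + (end[1] - start[1]) * i / n)
        for i in range(n + 1)
    ]


pts = segment_route((0.0, 0.0), (30.0, 40.0))  # a 50 m leg
print(len(pts) - 1, "segments")                # 5 segments of 10 m each
```

In the real workflow each segment would be checked against the hazard map before the next one is appended, which is where the iterate-and-critique loop comes in.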

The output wasn’t a suggestion or a natural-language description. Claude wrote actual drive commands in Rover Markup Language (RML) - the XML-based programming language originally developed for NASA’s Mars Exploration Rover missions around 2004. It identified hazards with 98.4% accuracy.

The Verification Process

Nobody at JPL sent AI-generated commands to a $2.7 billion rover on another planet without checking them first.

The AI-generated waypoints went through Perseverance’s standard daily simulation process. Engineers at JPL’s Rover Operations Center modeled over 500,000 telemetry variables using a digital twin to verify the projected positions and predict hazards along the proposed route.
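Conceptually, that verification compares each proposed waypoint against where the simulation predicts the rover will actually end up, and flags any divergence beyond a tolerance. A toy version - the tolerance value and the `simulate_drive` stub are invented stand-ins for the digital twin:

```python
import math

POSITION_TOL_M = 0.5  # hypothetical acceptance tolerance, in meters


def simulate_drive(waypoint):
    """Stand-in for the digital twin: here it just adds a small modeled drift."""
    return (waypoint[0] + 0.1, waypoint[1] - 0.05)


def verify_plan(planned_waypoints, tol=POSITION_TOL_M):
    """Return the waypoints whose simulated position diverges beyond tol."""
    flagged = []
    for wp in planned_waypoints:
        predicted = simulate_drive(wp)
        if math.dist(wp, predicted) > tol:
            flagged.append(wp)
    return flagged


plan = [(0.0, 0.0), (10.0, 2.0), (20.0, 5.0)]
print(verify_plan(plan))  # [] -> every waypoint lands within tolerance
```

The real process evaluates hundreds of thousands of telemetry variables rather than a single position error, but the accept/flag structure is the same.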

When they reviewed Claude’s plans, only minor changes were needed. At one point, ground-level camera images - which Claude hadn’t seen - revealed sand ripples on either side of a narrow corridor, and the rover drivers subdivided that section more finely than Claude had. Otherwise, the plan held up.

What This Means

This is one of the clearest demonstrations of AI doing something genuinely useful rather than generating marketing copy or chatbot responses. The task is well-defined, the stakes are high, the verification process is rigorous, and the efficiency gains are measurable.

Route planning is a real bottleneck. Every hour saved on planning is an hour that can go toward science - analyzing rock samples, studying dust devils, or investigating potential biosignatures at sites like “Cheyava Falls”. More efficient planning also means more frequent drives and greater total distance covered over the mission’s lifetime.

But the implications extend beyond Mars. JPL roboticist Vandi Verma pointed toward kilometer-scale autonomous drives as the goal - AI handling routine navigation while human operators focus on scientific decisions. Communication delays get worse the farther you go: signals to Jupiter’s moon Europa take 33 to 53 minutes one way, and signals to Saturn’s moon Titan take over an hour. At those distances, some degree of autonomous planning stops being a nice-to-have and becomes a necessity.

NASA has also suggested the approach could support Artemis lunar missions and eventual deep-space exploration.

What’s Worth Watching

A few things to keep an eye on:

The 98.4% hazard accuracy sounds impressive, but the 1.6% matters. On Earth, a 1.6% miss rate in terrain analysis is a rounding error. On Mars, it could mean another Spirit. JPL’s human verification layer caught the issues this time. The question is whether future missions will maintain that rigor as the technology becomes routine, or whether the pressure to move faster will erode oversight.

This was a controlled demonstration, not operational deployment. Engineers picked the terrain, provided extensive context, and reviewed everything. Scaling this to daily autonomous operations across unpredictable Martian terrain is a different challenge.

The model used orbital imagery it was given, not onboard sensor data. Claude didn’t have access to Perseverance’s ground-level cameras during planning. As the demonstration showed, ground-level views caught things orbital images missed. Any production system would need to integrate both data sources, likely requiring real-time or near-real-time model inference - a very different technical problem from batch analysis on Earth.

Still, this is a legitimate milestone. It’s AI doing hard, consequential work under proper supervision, with clear metrics and transparent limitations. That’s the version of AI progress worth paying attention to.