Surgical Robots Learn to Operate Without Human Help

Two studies in Science Robotics show robots performing gallbladder removal and laparoscopic tasks autonomously, trained by watching surgeon videos.

Robots are learning to perform surgery by watching videos of experienced surgeons. Two recent studies published in Science Robotics demonstrate autonomous systems completing complex procedures, from gallbladder removal to multiple laparoscopic tasks, without human intervention.

The Johns Hopkins Gallbladder Robot

A robot called SRT-H performed gallbladder removal on lifelike patient models with 100% accuracy. The procedure involved 17 complex tasks including identifying ducts and arteries, placing clips, and cutting tissue with scissors.

The robot learned through imitation. Researchers led by Axel Krieger at Johns Hopkins showed the system videos of surgeons performing gallbladder procedures on pig cadavers. Using a transformer-based machine learning architecture of the kind underlying ChatGPT, SRT-H extracted the surgical steps and learned to replicate them.
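At its core, learning by imitation means treating expert demonstrations as supervised training data: pair what the robot sees with what the surgeon did, then fit a policy to reproduce those actions. The sketch below is purely illustrative, not the SRT-H system itself (which uses a transformer over video); a linear least-squares policy on synthetic data stands in to show the principle.

```python
import numpy as np

# Illustrative behavioral cloning: fit a policy mapping observations
# (stand-ins for visual features from demonstration video frames) to
# expert actions (stand-ins for instrument motions). All data here is
# synthetic; the real system learns a far richer mapping.

rng = np.random.default_rng(0)

# Synthetic "demonstrations": 500 frames of 16-dim features, with
# expert actions an unknown linear function of the features plus noise.
true_W = rng.normal(size=(16, 6))              # hidden expert mapping
obs = rng.normal(size=(500, 16))               # observation features
actions = obs @ true_W + 0.01 * rng.normal(size=(500, 6))

# "Training" is just supervised regression on (observation, action) pairs.
W_hat, *_ = np.linalg.lstsq(obs, actions, rcond=None)

def policy(observation):
    """Predict an action for a new observation, imitating the expert."""
    return observation @ W_hat

# The learned policy closely matches the expert on held-out frames.
test_obs = rng.normal(size=(50, 16))
err = np.abs(policy(test_obs) - test_obs @ true_W).max()
print(f"max action error: {err:.4f}")
```

The point of the toy example is the data flow, not the model class: demonstrations in, a policy out, with no hand-coded surgical rules.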

The robot's performance was comparable to that of expert surgeons, though it took longer. It also handled unexpected challenges, including anatomical variations and blood-like dyes that altered how tissue appeared during operations.

Generalized Surgical Intelligence

A separate team from the Chinese University of Hong Kong, Cornerstone Robotics, and Johns Hopkins tackled a harder problem: building a system that generalizes across different surgical tasks rather than mastering just one.

Their approach combines visual parsing using depth estimation and image segmentation, reinforcement learning, and zero-shot sim-to-real transfer, in which policies trained entirely in simulation are deployed on physical hardware without additional real-world training.

Testing included seven game-based skill tasks on the da Vinci Research Kit, five surgical assistive tasks with the Sentire system on ex vivo animal tissue, and three tasks validated in live-animal trials. The system handled variations in scene layout, object sizes, instrument types, and lighting conditions.
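Robustness to variations like those is often achieved through domain randomization: training across many randomized simulation conditions so the policy still works under conditions it never saw. The papers' exact recipe is not detailed here, so the sketch below is a generic, assumption-laden illustration of the idea on a toy one-dimensional control task.

```python
import numpy as np

# Illustrative domain randomization, a common route to zero-shot
# sim-to-real transfer. A toy 1-D reaching task: the controller applies
# action = gain * error, and an unknown simulator 'scale' (a stand-in
# for tissue properties, lighting, object size, etc.) multiplies the
# effect of each action.

rng = np.random.default_rng(1)

def simulate(gain, scale):
    """Run 20 control steps from position 1.0 toward target 0.0;
    return the final distance to the target."""
    x = 1.0
    for _ in range(20):
        x -= gain * scale * x
    return abs(x)

# Train: choose the gain that minimizes error *averaged over many
# randomized simulator scales* (here via simple grid search).
scales = rng.uniform(0.5, 1.5, size=100)     # randomized training domains
gains = np.linspace(0.05, 0.95, 50)
avg_err = [np.mean([simulate(g, s) for s in scales]) for g in gains]
robust_gain = gains[int(np.argmin(avg_err))]

# Zero-shot "real world" test: a scale never seen during training.
real_err = simulate(robust_gain, scale=1.7)
print(f"gain={robust_gain:.2f}, unseen-condition error={real_err:.5f}")
```

A gain tuned to one fixed scale would overfit that simulator; averaging over randomized scales produces a controller that still converges under the out-of-distribution test condition.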

What This Means

These are still research systems, not clinical tools. The Johns Hopkins robot worked on lifelike models, not human patients. The generalized system required live-animal trials to validate simulation-to-reality transfer.

But the trajectory matters. Previous autonomous surgery research focused on specific, constrained tasks. These studies show robots learning general surgical skills from video demonstrations, then adapting to novel situations.

The practical implications cut two ways. Autonomous surgical robots could extend expert-level surgery to places without enough surgeons, standardize procedures, and reduce human fatigue. They could also displace surgical work, raise liability questions, and introduce new failure modes.

The Fine Print

Neither system has regulatory approval for human surgery. The path from research demonstration to FDA clearance is long and uncertain.

Current surgical robots like Intuitive’s da Vinci system are teleoperated: surgeons control every movement through a console. These autonomous systems represent a different category that regulators have not yet addressed.

The Johns Hopkins research was funded by ARPA-H, the federal agency focused on health breakthroughs, along with NIH and NSF. The generalized surgical intelligence work involved collaboration between academic labs and Cornerstone Robotics, a company developing surgical systems.

Timing estimates vary. Some researchers suggest partial autonomy for specific surgical subtasks within five years. Full autonomy for complex procedures remains further out, limited less by technical capability than by safety validation and regulatory frameworks.