Researchers at UC Berkeley and UCSF have released Pillar-0, an open-source AI model that can identify over 350 clinical conditions from a single CT or MRI scan—and it outperforms competing models from Google, Microsoft, and Alibaba on diagnostic accuracy.
What Makes Pillar-0 Different
Most medical imaging AI analyzes scans slice by slice, treating 3D volumes as stacks of 2D images. Pillar-0 takes a fundamentally different approach: it processes entire 3D volumes directly using a novel architecture called Atlas.
The results are dramatic. Atlas is over 150 times faster than traditional vision transformers at processing an abdomen CT, according to first author Kumar Krishna Agrawal, a Ph.D. candidate at Berkeley. This isn’t just an engineering improvement—it enables the model to capture spatial relationships across an entire scan that slice-by-slice analysis misses.
The model works across multiple imaging types: chest CT, abdomen CT, brain CT, and breast MRI. From a single scan, it can flag hundreds of potential findings simultaneously.
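The difference between the two approaches can be sketched in a few lines. This is an illustrative toy, not Pillar-0 or Atlas code: a slice-by-slice pipeline summarizes each 2D image independently and only stacks the results afterward, while a volume-level operation sees the whole 3D array at once and can relate voxels across slices.

```python
import numpy as np

# Toy CT volume shaped (depth, height, width); real scans are far larger.
volume = np.random.rand(64, 128, 128)

# Slice-by-slice: each 2D slice is summarized on its own, so any pattern
# spanning slices (e.g., a lesion's full 3D extent) is invisible here.
slice_features = np.stack([s.mean() for s in volume])  # shape (64,)

# Volume-level: one operation over the entire 3D array (here a trivial
# global statistic) can, in principle, capture cross-slice structure.
volume_feature = volume.mean()  # single scalar for the whole scan
```

In a real model the per-slice and per-volume summaries would of course be learned features rather than means, but the shape of the problem is the same: the first path never lets information flow between slices.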
How It Performs
On a benchmark of 350+ clinical findings, Pillar-0 achieved an area under the curve (AUC) of 0.87, where 0.5 represents chance and 1.0 a perfect classifier. For context, here’s how the competition stacks up:
- Pillar-0: 0.87 AUC
- Google’s MedGemma: 0.76 AUC
- Microsoft’s MI2: 0.75 AUC
- Alibaba’s Lingshu: 0.70 AUC
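For readers unfamiliar with the metric, AUC has a simple interpretation: it is the probability that the model scores a randomly chosen positive case higher than a randomly chosen negative one. A minimal sketch using synthetic labels and scores (not Pillar-0 outputs):

```python
import numpy as np

# Synthetic data for illustration only.
labels = np.array([0, 0, 1, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.2, 0.8, 0.7, 0.5, 0.9])

pos = scores[labels == 1]
neg = scores[labels == 0]

# Compare every positive score against every negative score;
# ties count half. The fraction of "wins" is the AUC.
wins = (pos[:, None] > neg[None, :]).sum()
ties = (pos[:, None] == neg[None, :]).sum()
auc = (wins + 0.5 * ties) / (pos.size * neg.size)
print(auc)  # → 0.875
```

An AUC of 0.87 therefore means that, given one scan with a finding and one without, the model ranks them correctly about 87% of the time.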
The model also improved upon Sybil-1, an existing lung cancer prediction tool, by 7% when validated on external data from Massachusetts General Hospital.
“We’re releasing everything,” said Adam Yala, assistant professor at UC Berkeley and UCSF and the study’s senior author. The complete codebase, trained models, evaluation pipelines, and documentation are publicly available.
What This Means
Open-source medical AI has lagged behind proprietary systems from tech giants. Pillar-0 changes that equation. Any hospital, research institution, or startup can now build on a state-of-the-art radiology foundation model without licensing fees or data-sharing agreements.
The practical implications are significant. A model that processes full 3D volumes 150x faster than alternatives could enable real-time analysis during procedures, or allow smaller institutions to deploy advanced diagnostics without massive computational infrastructure.
Dr. Maggie Chung, an assistant professor in radiology at UCSF who co-developed the evaluation framework, emphasized that the model was designed with clinical deployment in mind—not just benchmark performance.
The Fine Print
Pillar-0 is a foundation model, not an FDA-cleared diagnostic device. It requires fine-tuning for specific clinical tasks before deployment, though the researchers note it needs minimal data for this adaptation.
The model was trained on institutional data from Berkeley and UCSF, which may not fully represent patient populations at other hospitals. External validation across diverse healthcare settings will be needed before widespread clinical adoption.
Still, the open release means the broader research community can now stress-test, improve, and build on this work—something proprietary models don’t allow. That transparency may ultimately matter more than any single benchmark score.