Inside HHS's AI-Powered Ideological Screening of Federal Grants

The Department of Health and Human Services has deployed Palantir and Credal AI tools to flag grants for “DEI” and “gender ideology” since March 2025 - with a vaccine injury AI tool raising additional concerns

The Department of Health and Human Services has been quietly using artificial intelligence to screen federal grants for ideological compliance since March 2025. According to HHS’s recently published AI use case inventory, the agency deployed tools from Palantir and startup Credal AI to flag grants, grant applications, and job descriptions for anything perceived as aligned with “DEI” or “gender ideology.”

Neither Palantir nor HHS publicly announced the deployment. The revelation comes from routine disclosure requirements, not any transparency initiative.

How the Screening Works

Palantir’s Foundry platform - the same technology the company sells to intelligence agencies and Fortune 500 companies - now screens HHS documents for keywords and phrases that might violate two Trump executive orders targeting diversity initiatives.

The system doesn’t just flag obvious terms. According to reporting by Wired’s Caroline Haskins, the AI sifts through grant applications to identify patterns and language that might indicate noncompliance, even when explicit DEI terminology isn’t present.
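Neither HHS nor Palantir has published the system’s actual logic, so any reconstruction is speculative. But the two-layer design Wired’s reporting describes - exact keyword hits plus looser pattern matching - is a common one. Here is a minimal, purely hypothetical sketch; the term lists are invented for illustration, not the government’s:

```python
import re

# Hypothetical keyword list - the actual flag terms are not public.
FLAG_TERMS = {"diversity", "equity", "inclusion", "dei", "gender identity"}

# Looser patterns meant to catch language even when no explicit term appears.
# (A real system would tokenize rather than do raw substring matching.)
FLAG_PATTERNS = [
    re.compile(r"underrepresented\s+(groups|populations|minorities)", re.I),
    re.compile(r"health\s+(disparities|inequities)", re.I),
]

def flag_document(text: str) -> list[str]:
    """Return the reasons a document would be flagged, if any."""
    reasons = []
    lowered = text.lower()
    for term in FLAG_TERMS:
        if term in lowered:
            reasons.append(f"keyword: {term!r}")
    for pattern in FLAG_PATTERNS:
        match = pattern.search(text)
        if match:
            reasons.append(f"pattern: {match.group(0)!r}")
    return reasons

sample = ("This study recruits participants from underrepresented "
          "populations to examine health disparities in rural care.")
print(flag_document(sample))
# ["pattern: 'underrepresented populations'", "pattern: 'health disparities'"]
```

Even this toy version illustrates the problem: the sample abstract trips two flags without containing a single explicit DEI term.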

Credal AI - a Y Combinator-backed startup whose founders previously worked at Palantir - provides the “guardrails” for this system. The company specializes in controlling what large language models can and cannot do, ensuring they operate within defined parameters. At HHS, Credal calibrates and constrains Palantir’s flagging capabilities.
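Credal hasn’t disclosed how its guardrails are configured at HHS, but the general pattern the product category implements is well known: coerce a model’s free-form output into a sanctioned set of labels before anything downstream acts on it. A generic sketch, with an invented label set:

```python
from dataclasses import dataclass

# Hypothetical label set - the real HHS/Credal configuration is not public.
ALLOWED_LABELS = {"compliant", "needs_review", "flagged"}

@dataclass
class GuardrailResult:
    label: str
    passed: bool
    reason: str

def apply_guardrails(model_output: str) -> GuardrailResult:
    """Coerce a model's free-text verdict into a sanctioned label set."""
    label = model_output.strip().lower()
    if label not in ALLOWED_LABELS:
        # Out-of-bounds output never reaches downstream systems directly;
        # it gets routed to human review instead.
        return GuardrailResult(
            "needs_review", False,
            f"model returned unsanctioned output {label!r}",
        )
    return GuardrailResult(label, True, "label within policy")

print(apply_guardrails("FLAGGED"))
# GuardrailResult(label='flagged', passed=True, reason='label within policy')
print(apply_guardrails("this grant is clearly noncompliant"))
# GuardrailResult(label='needs_review', passed=False, reason="model returned ...")
```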

The irony is hard to miss: a company built to keep AI from going off the rails is helping the government screen scientific research for political acceptability.

The Executive Orders Behind the Screening

On January 20, 2025, President Trump signed Executive Order 14151, “Ending Radical and Wasteful Government DEI Programs and Preferencing.” It directed federal agencies to terminate DEI-related contracts and grants within 60 days, along with anything connected to “environmental justice.”

A second order, EO 14173, required grant recipients to certify they don’t operate programs “promoting DEI that violate any applicable Federal anti-discrimination laws” - without defining what that means.

The vagueness is the point. Without clear definitions, researchers and institutions must guess what language might trigger a flag. The chilling effect on grant applications is a feature, not a bug.

On February 21, 2025, a federal judge issued a preliminary injunction blocking enforcement of both executive orders. The court found the term “diversity, equity, and inclusion” as used in the orders was unconstitutionally vague and potentially violated First and Fifth Amendment protections.

Yet the AI screening continued.

In late December 2025, the Trump administration agreed to pause anti-DEI criteria for stalled NIH research grants while legal challenges proceed. But that settlement only covers grant applications submitted through September 2025 - and doesn’t appear to address the AI systems still in place.

Palantir’s Expanding Government Footprint

HHS isn’t Palantir’s only federal customer. The company has secured more than $900 million in federal contracts in 2025 alone, including a $30 million ICE contract to track undocumented immigrants and a nearly $1 billion Navy software deal.

Palantir’s U.S. government revenue spiked 66% in Q4 2025 compared to the year prior, reaching $570 million.

The company has long faced criticism for enabling surveillance. After Palantir took over the Pentagon’s Project Maven in 2019 - the drone-footage analysis AI that Google abandoned after employee protests - critics argued the company was willing to do work other tech firms wouldn’t touch.

Investor Paul Graham accused Palantir of “building the infrastructure of the police state.” Civil liberties organizations warn that Palantir’s platforms can aggregate sensitive data from tax returns, employment records, immigration status, and family information, then layer AI on top to predict patterns and movements.

CEO Alex Karp defends the work as necessary for liberal democracies to function, while acknowledging that the company’s products can process data obtained through lawful surveillance. That distinction - between building surveillance and merely processing lawfully collected data - may not matter much to the researchers whose grants are being flagged.

The Vaccine AI Tool

HHS is also developing a separate AI system that raises different concerns. According to Wired’s reporting, the department is building a generative AI tool to find patterns across vaccine injury data.

The tool, in development since late 2023, would analyze reports from the Vaccine Adverse Event Reporting System (VAERS). Experts worry the predictions it generates could be used by HHS Secretary Robert F. Kennedy Jr. to advance an anti-vaccine agenda.

VAERS has a fundamental limitation: it records adverse event reports but not how many doses were administered, so there is no denominator - raw report counts can make adverse events look far more common than they actually are. LLMs are also prone to hallucination - generating confident-sounding but false information. Deploying such a system to guide vaccine policy decisions raises obvious risks.
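The denominator problem is easy to demonstrate with arithmetic. Using invented numbers (not real VAERS data), the same report count implies radically different rates depending on how many doses were given - a figure VAERS simply doesn’t capture:

```python
# Illustrative numbers only - not real VAERS data.
reports = 1_000  # adverse event reports for some hypothetical vaccine

# VAERS records the numerator but not the denominator. Without doses
# administered, the same report count implies wildly different rates:
for doses in (100_000, 10_000_000, 100_000_000):
    rate = reports / doses
    print(f"{doses:>11,} doses -> {rate:.5%} reporting rate")

#     100,000 doses -> 1.00000% reporting rate
#  10,000,000 doses -> 0.01000% reporting rate
# 100,000,000 doses -> 0.00100% reporting rate
```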

Kennedy has already removed several shots from the recommended childhood vaccination schedule, including vaccines for COVID-19, influenza, hepatitis A and B, meningococcal disease, rotavirus, and RSV. He’s also dismissed members of the Advisory Committee on Immunization Practices before their terms expired.

An AI tool that generates vaccine injury “hypotheses” from incomplete data could provide scientific-sounding cover for further rollbacks.

What This Means

The HHS AI screening represents something new: automated ideological vetting of scientific research. The government has always made funding decisions based on perceived merit and alignment with priorities. But deploying Palantir’s surveillance infrastructure to flag grants for political language is a different kind of intrusion.

Researchers now face a choice: avoid certain topics and terminology entirely, or risk having their work flagged by an algorithm they can’t see or challenge. Self-censorship is hard to measure, but it’s the predictable outcome when AI systems patrol the boundaries of acceptable thought.

The legal challenges may eventually constrain these tools. Courts have already found the underlying executive orders constitutionally suspect. But the technology is in place, the screening is ongoing, and the precedent is set.

Whether wielded by the current administration or a future one, the infrastructure for AI-powered ideological screening of federal grants now exists. That’s not easily undone.

The Bottom Line

When the government deploys a surveillance company’s AI to screen scientific grants for political compliance, we’ve crossed a line that matters - regardless of what one thinks about DEI policies. The question isn’t whether certain grants should be funded. It’s whether algorithmic screening for ideological purity belongs anywhere near scientific research.