A pharmaceutical manufacturer used AI agents to write its drug specifications, manufacturing procedures, and production records. Nobody reviewed the output. When FDA inspectors asked why basic process validation had never been performed before drugs were distributed, the company’s response was remarkable in its honesty: the AI agent never told them it was required.
On April 2, the FDA did something it had never done before. It issued a warning letter with a dedicated section titled “Inappropriate Use of Artificial Intelligence in Pharmaceutical Manufacturing,” creating a new standalone category of Current Good Manufacturing Practice (cGMP) deficiency. The recipient was Purolea Cosmetics Lab. The company has since halted drug production.
What Purolea Actually Did
The violations were straightforward. Purolea used AI agents to generate the documents that govern how drugs are made: specifications, procedures, master production records, and control records. These are the documents that determine whether a drug is safe to take. Under 21 CFR 211.22(c), a company’s Quality Unit must review and approve these documents.
Purolea didn’t do that. The AI generated the documents. The company used them as-is.
The second violation was worse. Under 21 CFR 211.100, manufacturers must validate their processes before distributing drugs. This is pharmaceutical manufacturing 101. When FDA investigators pointed out that this had never been done, the company’s response cut to the heart of a problem that extends far beyond one small lab: the owner said she was unaware the requirement existed because the AI agent she relied on had never surfaced it.
She was making drugs for human consumption based on whatever an AI chatbot decided to include in its output.
Why This Should Worry You
The Purolea case is small. The company isn’t a major pharmaceutical player. But the pattern it reveals is not small at all.
The FDA’s warning letter establishes a principle: AI-generated outputs in regulated industries require the same human review as any other tool output. Any recommendation from an AI agent must be “reviewed and cleared by an authorized human representative of the firm’s Quality Unit,” per Section 501(a)(2)(B) of the FD&C Act.
This sounds obvious. It should be obvious. But consider what happened here: a person running a business that manufactures products people put in their bodies delegated compliance to a chatbot and genuinely believed the chatbot’s silence on a topic meant the requirement didn’t exist. The absence of an AI-generated warning became, in her mind, evidence of safety.
This is not a unique cognitive failure. It’s the predictable result of treating AI systems as authorities rather than tools. When an AI agent confidently generates a specification document that looks professional and thorough, the natural human response is to trust it. The parts the AI gets right reinforce that trust. The parts it silently omits become invisible.
What’s Being Done (And Why It’s Not Enough)
The FDA has now drawn a line. AI-generated documents in pharmaceutical manufacturing require human review and approval. The agency treated AI misuse as a distinct violation category, signaling that similar enforcement actions will follow.
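What would honoring that line look like in practice? Here is a minimal sketch, not drawn from FDA guidance and with every name hypothetical: an agentic documentation pipeline that refuses to release any AI-generated document until a named Quality Unit reviewer has explicitly signed off, so the default state of unreviewed output is blocked, never approved.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class Status(Enum):
    DRAFT = "draft"          # AI output exists but is unreviewed
    APPROVED = "approved"    # a named human has signed off
    REJECTED = "rejected"


@dataclass
class GeneratedDocument:
    """An AI-generated manufacturing document (spec, SOP, batch record)."""
    doc_id: str
    body: str
    status: Status = Status.DRAFT
    reviewer: str | None = None
    reviewed_at: datetime | None = None


def approve(doc: GeneratedDocument, reviewer: str) -> None:
    """Record an explicit, attributed human sign-off.

    Approval is an affirmative act by a named person; the AI's
    silence on a topic never counts as approval of anything.
    """
    doc.status = Status.APPROVED
    doc.reviewer = reviewer
    doc.reviewed_at = datetime.now(timezone.utc)


def release(doc: GeneratedDocument) -> str:
    """Release a document for use on the manufacturing floor.

    Deny by default: anything short of an explicit, attributed
    approval raises an error instead of releasing the document.
    """
    if doc.status is not Status.APPROVED or doc.reviewer is None:
        raise PermissionError(
            f"{doc.doc_id}: unreviewed AI output cannot be released"
        )
    return doc.body
```

The operative detail is the default. An unreviewed document is blocked, not released. Purolea’s pipeline had the opposite default, and the AI’s silence flowed straight to the manufacturing floor.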
But the FDA’s framework addresses only the narrowest version of this problem. Pharmaceutical manufacturing has extensive written regulations that predate AI entirely. The requirement for process validation isn’t ambiguous or hard to find. The real question is what happens in domains where the rules are less codified, where “did you validate your process” doesn’t have a regulatory citation to point to.
Three in four organizations are now deploying agentic AI in some form, according to recent industry surveys. ISACA’s 2026 AI Pulse Poll found that 56% of digital trust professionals don’t know how quickly they could halt an AI system after a security incident. Twenty percent of organizations don’t even know who would be responsible if an AI system caused harm.
The FDA can write warning letters about AI in drug manufacturing. But nobody is writing warning letters about AI agents making hiring decisions, approving loans, or triaging medical symptoms. In those domains, the consequences of a confident AI silently omitting something important are just as severe, and the regulatory framework to catch it doesn’t yet exist.
Purolea’s defense was absurd. But the underlying assumption, that what the AI doesn’t flag must not be important, is becoming standard operating procedure across industries that should know better.