Security researchers have discovered two critical vulnerabilities in Chainlit, a popular open-source Python framework used to build AI chatbots and conversational applications. The flaws, dubbed “ChainLeak,” can be chained together to steal cloud credentials, API keys, and sensitive user data from enterprise deployments.
The vulnerabilities affect Chainlit’s element handling system, which manages file attachments like PDFs and images in chat conversations. Both require only basic authentication to exploit and need no user interaction.
The Vulnerabilities
CVE-2026-22218 (CVSS 7.1) is an arbitrary file read flaw in the /project/element endpoint. Chainlit validates that an element type equals “custom” but fails to check other properties. Attackers can set the path property to any server-accessible file, causing Chainlit to copy its contents into their session.
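The missing check can be illustrated with a defensive sketch: before copying anything, validate that a user-supplied path resolves inside an allowed upload directory. The function name and directory below are illustrative assumptions, not Chainlit's actual API or patch.

```python
import os

# Hypothetical upload root -- an assumption for illustration.
ALLOWED_ROOT = "/app/uploads"

def is_safe_element(element: dict) -> bool:
    """Reject elements whose 'path' resolves outside the upload root.

    realpath() collapses '..' segments and symlinks before the prefix
    check, so '../../proc/self/environ' cannot slip through.
    """
    path = element.get("path")
    if path is None:
        return True  # nothing to copy
    resolved = os.path.realpath(os.path.join(ALLOWED_ROOT, path))
    return resolved.startswith(ALLOWED_ROOT + os.sep)

# An attacker-style payload aimed at the process environment is rejected:
print(is_safe_element({"type": "custom", "path": "../../proc/self/environ"}))  # False
print(is_safe_element({"type": "custom", "path": "report.pdf"}))               # True
```

Note that the check operates on the resolved path, so absolute paths like /etc/passwd are rejected for the same reason as traversal sequences.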
CVE-2026-22219 (CVSS 8.3) is a server-side request forgery bug in deployments using the SQLAlchemy backend. When an element is created with an attacker-controlled URL, the server fetches that URL and stores the response. This lets attackers make the server issue HTTP requests to internal networks or cloud metadata endpoints.
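A common mitigation for this class of SSRF is to refuse to fetch URLs that resolve to loopback, private, or link-local addresses, which covers the 169.254.169.254 metadata endpoint. The sketch below is a simplified guard, not Chainlit's actual fix; a production version must also resolve hostnames via DNS and re-check every resolved address before connecting.

```python
import ipaddress
from urllib.parse import urlparse

def is_fetchable(url: str) -> bool:
    """Return False for URLs pointing at internal or metadata addresses."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        return False
    host = parsed.hostname or ""
    try:
        addr = ipaddress.ip_address(host)
    except ValueError:
        # Host is a name, not an IP literal. A real guard must resolve it
        # and re-check each resolved address (pinning it for the request)
        # to defend against DNS rebinding.
        return True
    return not (addr.is_private or addr.is_link_local or addr.is_loopback)

print(is_fetchable("http://169.254.169.254/latest/meta-data/"))  # False
print(is_fetchable("https://example.com/report.pdf"))            # True
```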
“If the ‘path’ property of the element is set, it will always be copied to our user session files, regardless of the element type,” Zafran researchers explained. “Once an attacker gains arbitrary file read access on the server, the AI application’s security quickly begins to collapse.”
How the Attack Works
The two flaws combine into a devastating attack chain:
- Steal environment variables by reading /proc/self/environ, which exposes API keys, authentication secrets, and cloud credentials
- Access AWS credentials via SSRF against the IMDSv1 metadata endpoint at 169.254.169.254, enabling lateral movement through cloud accounts
- Dump database contents using exposed DATABASE_URL credentials
- Leak all user conversations when LangChain caching stores prompts and responses in .chainlit/.langchain.db
- Forge authentication tokens with stolen CHAINLIT_AUTH_SECRET values
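To gauge the blast radius of the first step, defenders can enumerate which environment variable names in their own deployment look secret-bearing; anything a sketch like this flags would have been readable via /proc/self/environ. The marker list is an illustrative assumption, not an exhaustive one.

```python
import os

# Substrings that commonly mark secret-bearing variables (assumed list).
SENSITIVE_MARKERS = ("KEY", "SECRET", "TOKEN", "PASSWORD", "DATABASE_URL")

def sensitive_env_names(environ=os.environ) -> list:
    """Return sorted names (not values) of likely secret-bearing variables."""
    return sorted(
        name for name in environ
        if any(marker in name.upper() for marker in SENSITIVE_MARKERS)
    )

# Report names only, so running the audit never prints the secrets themselves.
print(sensitive_env_names())
```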
Widespread Impact
Chainlit has over 7.3 million total downloads, with roughly 220,000 downloads weekly. The researchers confirmed multiple internet-facing vulnerable servers operated by large enterprises and academic institutions.
The framework is commonly used for customer support bots, enterprise AI copilots, knowledge management systems, and educational assistants. These deployments often handle sensitive information and connect to internal APIs with privileged credentials.
This discovery prompted Zafran to launch “Project DarkSide,” a research initiative examining security weaknesses across AI application framework layers. Early findings suggest these issues aren’t unique to Chainlit: Microsoft’s MarkItDown MCP server contains a similar SSRF flaw, and an analysis of more than 7,000 MCP servers found that 36.7% are likely exposed to comparable risks.
Timeline and Fix
Zafran responsibly disclosed the vulnerabilities to Chainlit maintainers on November 23, 2025. Patches landed on December 24, 2025 in version 2.9.4.
What to Do
If you use Chainlit:
- Upgrade immediately to version 2.9.4 or later (current release is 2.9.6)
- Audit your deployments for exposed environment variables and credentials
- Rotate all secrets that may have been accessible to the vulnerable service
- Enable IMDSv2 on AWS EC2 instances to block metadata endpoint SSRF
- Review access logs for suspicious /project/element requests
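For the log review, a small triage script can surface which clients hit the element endpoint and how often. This sketch assumes a common/combined access-log format with the client IP as the first field; adjust the parsing to your server's actual format.

```python
from collections import Counter

def element_hits_by_ip(lines):
    """Count /project/element requests per client IP in an access log."""
    hits = Counter()
    for line in lines:
        if "/project/element" in line:
            # Assumed format: client IP is the first space-separated field.
            ip = line.split(" ", 1)[0]
            hits[ip] += 1
    return hits

sample = [
    '203.0.113.7 - - [24/Dec/2025:10:00:00 +0000] "GET /project/element?id=1 HTTP/1.1" 200 512',
    '198.51.100.2 - - [24/Dec/2025:10:00:01 +0000] "GET /healthz HTTP/1.1" 200 2',
]
print(element_hits_by_ip(sample))  # only 203.0.113.7 is counted
```

Unusually high counts from a single IP, or any hits from clients that shouldn't use file attachments at all, warrant a closer look.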
If you’re building AI applications:
- Inventory your dependencies and check for similar framework vulnerabilities
- Assume AI frameworks are attack surfaces: they handle user input and often have elevated privileges
- Implement defense in depth with network segmentation and least-privilege access
The AI tooling ecosystem is moving fast, and security isn’t keeping pace. Every new framework, wrapper, and integration is a potential entry point. ChainLeak is a reminder that the infrastructure powering AI chatbots deserves the same scrutiny as any other critical system.