Two Critical Flaws in Microsoft's AI SDK Could Let Attackers Hijack Enterprise AI Agents

Microsoft Semantic Kernel has back-to-back CVSS 10.0 vulnerabilities enabling remote code execution and arbitrary file writes through AI agent function calls

Microsoft’s Semantic Kernel - the SDK that Fortune 500 companies use to build AI agents - has two critical vulnerabilities with maximum severity scores. Both allow remote code execution through features specifically designed for AI agent operations.

Two CVSS 10.0 Vulnerabilities in Two Weeks

The first vulnerability (CVE-2026-25592) was disclosed on February 6, 2026. It allows arbitrary file writes through AI agent function calling in the .NET SDK. An attacker can exploit the DownloadFileAsync and UploadFileAsync functions in the SessionsPythonPlugin to write files anywhere on the system - a classic path traversal attack dressed up in AI agent clothing.

The second (CVE-2026-26030) dropped on February 19. This one targets the Python SDK’s InMemoryVectorStore filter functionality. Because filters execute as Python lambda functions in the same process as the main application, a low-privileged attacker can inject code through “dangerous attribute names” that bypass the allowlist.

Both score CVSS 9.9-10.0. Both require only low privileges and no user interaction. Both can fully compromise affected systems.

What Is Semantic Kernel?

Semantic Kernel is Microsoft’s open-source SDK for building AI agents - the middleware layer that connects your enterprise systems to large language models. It’s the foundation of Microsoft’s broader AI agent strategy, recently merged with AutoGen into the “Microsoft Agent Framework.”

Microsoft explicitly markets it as “enterprise-grade.” Fortune 500 companies including BMW, KPMG, Citrix, TeamViewer, and Elastic have adopted it. The NuGet package alone has over a million downloads.

The SDK’s core purpose is giving AI agents the ability to call functions, access files, and execute code on your behalf. These vulnerabilities show what happens when that capability isn’t properly constrained.

The Technical Details

CVE-2026-25592 (File Write):

  • Component: SessionsPythonPlugin in .NET SDK
  • Attack vector: Malicious file paths passed to DownloadFileAsync or UploadFileAsync
  • Impact: Write arbitrary files anywhere on the system
  • Root cause: Path traversal (CWE-22) - the plugin doesn’t properly validate that file paths stay within intended directories
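The standard fix for this class of bug is a containment check: resolve the user-supplied path against the intended base directory and reject anything that escapes it. A minimal sketch in Python (the function name is illustrative, not Semantic Kernel’s actual API):

```python
# Hedged sketch of the containment check CWE-22 requires: resolve the
# candidate path, then verify it still sits under the intended base dir.
from pathlib import Path

def resolve_inside(base_dir: str, user_path: str) -> Path:
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    # After resolution, the candidate must be the base itself or below it.
    if base != candidate and base not in candidate.parents:
        raise ValueError(f"path escapes sandbox: {user_path}")
    return candidate

print(resolve_inside("/tmp/agent-files", "report.csv"))
# A traversal attempt like "../../etc/passwd" raises ValueError instead
# of resolving to a path outside /tmp/agent-files.
```

The key detail is resolving *before* comparing: string-prefix checks on the raw input miss `..` segments and symlinks.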

CVE-2026-26030 (Code Execution):

  • Component: InMemoryVectorStore in Python SDK
  • Attack vector: Malicious filter expressions that access dangerous attributes
  • Impact: Execute arbitrary code in the application context
  • Root cause: Code injection (CWE-94) - filter lambdas execute with full process privileges
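Why are in-process filter lambdas so dangerous? A toy illustration (not Semantic Kernel’s actual code) of how an attribute name that slips past an incomplete allowlist reaches Python internals:

```python
# Toy model of a vector-store filter compiled into an in-process lambda.
# The naive allowlist blocks one known-bad name, but "__class__" slips
# through, exposing the type object - the classic first link in a
# getattr-chain code-execution gadget.

class Record:
    def __init__(self, category):
        self.category = category

def build_filter(attr_name, expected):
    if attr_name == "__dict__":  # incomplete denylist-style check
        raise ValueError("forbidden attribute")
    return lambda rec: getattr(rec, attr_name) == expected

safe = build_filter("category", "books")
print(safe(Record("books")))  # True: the intended use

leak = build_filter("__class__", Record)
print(leak(Record("books")))  # True: the filter now inspects internals
```

The general lesson: when filter expressions run with full process privileges, the allowlist must be strict and positive (known-good names only), not a list of known-bad ones.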

Both vulnerabilities follow a common pattern in AI agent security: features designed for flexibility become attack surfaces when AI agents process untrusted input. Microsoft’s own documentation warns that “filters should not be set by untrusted sources, including by LLM inputs.” But that warning was clearly insufficient.

The Agent Security Problem

These aren’t isolated bugs. They’re symptoms of a fundamental tension in AI agent design: agents need capabilities to be useful, but those capabilities become dangerous when the agent processes adversarial input.

An AI agent using Semantic Kernel might:

  1. Receive a query from a user
  2. Use an LLM to determine what functions to call
  3. Execute those functions with parameters derived from the query

At each step, there’s an opportunity for injection. A prompt injection could manipulate the LLM into calling dangerous functions. A malicious file path or filter expression could exploit the functions directly. The attack surface is vast.
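One concrete defense is to validate the LLM-chosen function name and every argument at the dispatch boundary, before anything executes. A hedged sketch of the loop above (the registry and validators are hypothetical, not Semantic Kernel’s API):

```python
# Toy dispatch layer for step 3 of the loop: the LLM proposes a call,
# but execution happens only after the function name and each parameter
# pass an explicit allowlist. All names here are illustrative.

ALLOWED = {
    # function name -> per-parameter validator
    "get_weather": {"city": lambda v: isinstance(v, str) and v.isalpha()},
}

def dispatch(call: dict) -> str:
    name, args = call["name"], call["arguments"]
    spec = ALLOWED.get(name)
    if spec is None:
        raise PermissionError(f"function not allowlisted: {name}")
    for param, value in args.items():
        if param not in spec or not spec[param](value):
            raise ValueError(f"rejected argument: {param}={value!r}")
    return f"calling {name}({args})"  # real invocation would go here

print(dispatch({"name": "get_weather", "arguments": {"city": "Oslo"}}))
# A call to an unlisted function raises PermissionError; a malformed
# argument such as city="../etc" raises ValueError.
```

This treats the LLM’s output as untrusted input to a privileged dispatcher, exactly the trust boundary both CVEs crossed.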

We’ve seen this pattern before across the AI agent ecosystem. What makes these cases notable is that Microsoft’s Semantic Kernel is supposed to be the enterprise-grade, security-conscious option. Yet here we are.

Who’s Affected

If you’re building AI agents with Semantic Kernel, check your versions:

| Package | Vulnerable | Fixed |
| --- | --- | --- |
| semantic-kernel (Python) | < 1.39.4 | 1.39.4+ |
| Microsoft.SemanticKernel.Core (NuGet) | < 1.70.0 | 1.70.0+ |
| Microsoft.SemanticKernel.Plugins.Core (NuGet) | < 1.71.0 | 1.71.0+ |

The Python SDK vulnerability affects anyone using InMemoryVectorStore with filters. The .NET vulnerability affects anyone using SessionsPythonPlugin with file operations.
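Checking whether an installed version falls in the vulnerable range is a numeric comparison of the version tuple. A minimal sketch (for real projects, prefer a proper parser such as `packaging.version`, which also handles pre-release suffixes):

```python
# Compare dotted version strings numerically, component by component.
def is_vulnerable(installed: str, first_fixed: str) -> bool:
    """True if installed < first_fixed for plain dotted versions."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) < as_tuple(first_fixed)

print(is_vulnerable("1.39.3", "1.39.4"))  # True: needs the patch
print(is_vulnerable("1.70.0", "1.70.0"))  # False: already fixed
```

Note the tuple comparison: a naive string compare would wrongly rank `"1.9.0"` above `"1.70.0"`.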

What You Should Do

Immediately:

  1. Update to the patched versions listed above
  2. Audit any code paths where user input or LLM output flows into filter expressions or file operations

Short-term:

  • Implement Function Invocation Filters that validate all parameters against allowlists
  • Don’t use InMemoryVectorStore in production until you’ve verified you’re patched
  • Review Microsoft’s security samples for proper implementation patterns
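The invocation-filter idea above can be sketched as a decorator that checks each parameter against an allowlist before the function body runs. This is an illustrative pattern only, not Semantic Kernel’s filter API:

```python
# Sketch of a per-function invocation filter: every keyword argument is
# validated before the wrapped function executes. Hypothetical helper,
# shown only to illustrate the allowlist-at-the-boundary pattern.
import functools
import re

SAFE_FILENAME = re.compile(r"^[\w.-]+$")  # no separators, no traversal

def filtered(**validators):
    def decorate(fn):
        @functools.wraps(fn)
        def wrapper(**kwargs):
            for param, value in kwargs.items():
                check = validators.get(param)
                if check is None or not check(value):
                    raise ValueError(f"blocked parameter: {param}={value!r}")
            return fn(**kwargs)
        return wrapper
    return decorate

@filtered(remote_path=lambda p: bool(SAFE_FILENAME.match(p)))
def download_file(*, remote_path: str) -> str:
    return f"downloading {remote_path}"  # real I/O would happen here

print(download_file(remote_path="report.csv"))
# download_file(remote_path="../../etc/passwd") raises ValueError
```

Because the filter wraps the function itself, it catches bad parameters no matter which code path (user input, LLM output, another plugin) produced them.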

Long-term:

  • Treat AI agent capabilities as attack surfaces
  • Assume LLM outputs are adversarial
  • Apply defense-in-depth: sandbox agents, limit permissions, validate everything

The Bigger Picture

Two maximum-severity vulnerabilities in Microsoft’s flagship AI agent SDK within two weeks should give pause to anyone deploying AI agents in production. These weren’t subtle timing attacks or theoretical concerns - they were fundamental failures to sanitize input before executing dangerous operations.

Microsoft’s patches arrived quickly. But the underlying problem remains: AI agent frameworks are trading security for capability, and the industry hasn’t figured out how to do both well.

If you’re building AI agents for enterprise use, you’re operating at the bleeding edge of a security model that’s still being figured out. Act accordingly.