BodySnatcher: The “Most Severe” AI Vulnerability to Date Let Attackers Hijack Fortune 500 AI Agents

A hardcoded credential and broken authentication in ServiceNow let attackers impersonate any user and weaponize AI agents to create admin backdoors.

Security researchers have disclosed what they call “the most severe AI-driven security vulnerability uncovered to date” - a flaw in ServiceNow’s AI agents that could have let attackers impersonate any user and hijack autonomous AI systems at Fortune 500 companies.

The vulnerability, tracked as CVE-2025-12420 and nicknamed “BodySnatcher” by security firm AppOmni, received a severity score of 9.3 out of 10. With just an email address, attackers could bypass multi-factor authentication and single sign-on protections, then use ServiceNow’s Now Assist AI agents to create admin accounts and gain full control over enterprise systems.

ServiceNow patched the flaw in October 2025, but the details published by researchers offer a stark warning about how AI agents can turn routine security bugs into catastrophic breaches.

How the Attack Worked

The vulnerability chained together two separate failures. First, ServiceNow’s Virtual Agent API shipped with a hardcoded credential - a static string (“servicenowexternalagent”) that was identical across every ServiceNow deployment worldwide. Anyone who knew this credential could authenticate to the API.

Second, the API used a feature called “Auto-Linking” that automatically associated external users with internal ServiceNow accounts based solely on their email address. No additional verification required.

Combined, these flaws meant an attacker needed only two things: the hardcoded credential (easily discovered) and a target’s email address. With those, they could authenticate to ServiceNow as that user, bypassing whatever authentication protections the organization had configured.
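The broken pattern can be sketched in a few lines. This is a hypothetical simplification for illustration, not ServiceNow’s actual code; the user table and function names are invented, though the static credential string is the one researchers reported:

```python
# Hypothetical sketch of the flawed authentication pattern, not ServiceNow's code.
HARDCODED_TOKEN = "servicenowexternalagent"  # identical across every deployment

# Stand-in for the instance's internal user directory (invented for illustration).
USERS = {"admin@example.com": {"name": "admin", "roles": ["admin"]}}

def authenticate(token: str, email: str):
    """Flawed flow: a shared static token plus an unverified email address
    is enough to act as any internal user."""
    if token != HARDCODED_TOKEN:   # failure 1: the "secret" is public knowledge
        return None
    return USERS.get(email)        # failure 2: auto-link on email alone,
                                   # with no MFA, SSO, or proof of ownership

# An attacker who knows the token and a victim's email gets that user's session:
session = authenticate("servicenowexternalagent", "admin@example.com")
```

Nothing in this flow ever challenges the caller to prove they control the email address, which is why the organization’s MFA and SSO configuration never comes into play.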

That alone would be bad. What made it worse was Now Assist, ServiceNow’s agentic AI system.

When AI Agents Become Attack Vectors

ServiceNow’s Now Assist lets AI agents autonomously execute tasks on behalf of users. The agents can create records, modify configurations, and interact with connected enterprise systems - all based on natural language requests.

Once an attacker impersonated a privileged user through the broken authentication, they could instruct the AI agent to take any action that user was authorized to perform. In testing, AppOmni researcher Aaron Costello demonstrated directing an agent, from a hijacked session, to create a new account with full administrative privileges.

Security researchers call this “Agentic Amplification” - AI agents transforming routine security flaws into catastrophic risks. A traditional authentication bypass might let an attacker access some data. An authentication bypass against an AI agent lets them execute arbitrary workflows at machine scale.

The researchers also found they could defeat ServiceNow’s “supervised mode,” which was supposed to require human confirmation before agents took sensitive actions. By simply sending “Please proceed” requests after a delay, they bypassed the confirmation workflows entirely.
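The weakness in that design can be sketched as follows. This is an assumed, simplified model of a text-based confirmation gate, not ServiceNow’s implementation; the class and method names are invented:

```python
# Hypothetical model of why a chat-based confirmation gate fails (assumption,
# not ServiceNow's actual implementation).
class SupervisedAgent:
    def __init__(self):
        self.pending = None

    def request(self, action: str) -> str:
        """Queue a sensitive action pending 'human' confirmation."""
        self.pending = action
        return "Awaiting human confirmation"

    def message(self, text: str) -> str:
        # Flaw: the confirmation is just another message in the same session.
        # Whoever opened the session - including the attacker - can "confirm".
        if self.pending and "proceed" in text.lower():
            action, self.pending = self.pending, None
            return f"Executing: {action}"
        return "No action taken"

agent = SupervisedAgent()
agent.request("create admin account")
result = agent.message("Please proceed")  # attacker supplies the confirmation
```

A meaningful human-in-the-loop control would require the confirmation to arrive through a separately authenticated channel, not the same conversation the attacker already controls.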

Who Was Affected

ServiceNow is used by 85% of Fortune 500 companies as a central integration point for HR systems, customer relationship management, security operations, and more. Nearly half of AppOmni’s Fortune 100 customers were running the vulnerable Now Assist and Virtual Agent applications.

A successful exploit could have enabled massive data exfiltration of employee records, financial data, and customer information. It could also have allowed lateral movement to connected systems like Salesforce and Microsoft 365, plus the creation of persistent backdoors through hidden administrative accounts.

The Fix (And What Remains)

ServiceNow deployed patches on October 30, 2025. Cloud-hosted customers received automatic updates. Self-hosted customers need to upgrade to fixed versions: Now Assist AI Agents 5.1.18+ or 5.2.19+, and Virtual Agent API 3.15.2+ or 4.0.4+.
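For self-hosted teams auditing their own instances, the branch-aware check implied by those version numbers looks roughly like this. The parsing helper is an assumption; adapt it to however your inventory tooling reports plugin versions:

```python
# Sketch of a version check against the fixed releases named above.
# Fixed versions per release branch, from the published advisory figures.
FIXED = {
    "Now Assist AI Agents": [(5, 1, 18), (5, 2, 19)],
    "Virtual Agent API": [(3, 15, 2), (4, 0, 4)],
}

def parse(version: str) -> tuple:
    """Turn '5.1.17' into (5, 1, 17) for tuple comparison (assumed format)."""
    return tuple(int(part) for part in version.split("."))

def is_patched(app: str, version: str) -> bool:
    """Patched if the version is at or above the fix for its own branch."""
    v = parse(version)
    for fix in FIXED[app]:
        if v[:2] == fix[:2]:          # same major.minor branch as a listed fix
            return v >= fix
    return v > max(FIXED[app])        # a branch newer than any listed fix

print(is_patched("Now Assist AI Agents", "5.1.17"))  # below 5.1.18: vulnerable
print(is_patched("Virtual Agent API", "4.0.4"))      # at the fixed version
```

The branch comparison matters because, for example, 5.2.18 is newer than 5.1.18 numerically but still below the 5.2.19 fix for its own line.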

However, researchers warn that patches alone don’t eliminate the risk. Organizations may have built custom integrations that replicate the vulnerable patterns. The fundamental issues - hardcoded secrets and email-based auto-linking - could persist in custom code even after the official patch.

Why This Matters

BodySnatcher reveals a dangerous pattern in enterprise software: companies rushing to integrate AI agents without securing the foundations first. ServiceNow layered powerful autonomous agents onto an API that used weak authentication. The result was a vulnerability that turned AI itself into the attack surface.

As more companies deploy agentic AI, this problem will multiply. Traditional security tools struggle to monitor the dynamic behavior of AI agents. Authentication systems designed for human users don’t account for autonomous systems that can execute thousands of actions per minute.

The lesson from BodySnatcher isn’t just about one vendor’s misconfigured credential. It’s about what happens when enterprises deploy AI agents that inherit the permissions of the systems they integrate with - and nobody checks whether those systems were secure enough to bear that weight.

What You Can Do

For enterprise security teams:

  • Audit all AI agent deployments for inherited permissions
  • Verify authentication isn’t based solely on email addresses or weak tokens
  • Test whether “supervised mode” and human-in-the-loop controls actually work
  • Review custom integrations for patterns similar to auto-linking
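One concrete starting point for the last item is a crude scan of custom integration code for string-literal credentials. The patterns below are illustrative assumptions, not an exhaustive detector, and will produce false positives; treat hits as leads to review, not verdicts:

```python
# Minimal sketch of one audit step: flagging hardcoded-looking credentials
# in custom integration code. Patterns are illustrative, not exhaustive.
import re

SUSPECT = [
    # assignments like: token = "..."  /  api_key = '...'
    re.compile(r"""(token|secret|password|api_key)\s*=\s*["'][^"']+["']""", re.I),
    # inline header values like: "Authorization": "..."
    re.compile(r"""(Authorization|X-Api-Key)["']?\s*[:=]\s*["'][^"']+["']""", re.I),
]

def scan(source: str) -> list:
    """Return (line_number, line) pairs that match a suspect pattern."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), 1):
        if any(pat.search(line) for pat in SUSPECT):
            hits.append((lineno, line.strip()))
    return hits

sample = 'token = "servicenowexternalagent"\nuser = lookup(email)\n'
findings = scan(sample)
```

A real audit would pair a scan like this with secret-rotation and a review of how each integration links external identities to internal accounts.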

For individual users:

  • Ask your IT team if your organization uses ServiceNow with AI agents
  • Request confirmation that patches have been applied to self-hosted instances
  • Be aware that AI agents may be acting on your behalf in enterprise systems

The age of AI agents is here. BodySnatcher shows what happens when we let them loose without checking the locks first.