Security researchers at Check Point have demonstrated a technique that turns AI assistants into covert command-and-control channels for malware. The attack, disclosed on February 18, exploits web browsing features in Microsoft Copilot and xAI’s Grok to relay commands between attackers and compromised machines - with traffic indistinguishable from normal AI usage.
The most concerning part: it works without an API key or registered account. Traditional malware that uses cloud services for command-and-control can be shut down by revoking API access. This technique sidesteps that defense entirely.
How It Works
The attack, which Check Point calls “AI as a C2 proxy,” exploits a specific feature: the ability of AI assistants to fetch and summarize web content.
Here’s the attack chain:
- An attacker compromises a target machine through conventional means (phishing, exploit, supply chain attack)
- The malware opens a hidden browser instance pointing to Copilot or Grok
- The malware instructs the AI to “summarize” an attacker-controlled URL
- That URL contains hidden commands embedded in HTML content
- The AI fetches the page and returns the content - including the embedded commands
- The malware parses the AI’s response and executes the instructions
- Data can be exfiltrated by encoding it into URL query parameters for the AI to fetch
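The relay loop above can be sketched in miniature. This is a simplified illustration, not Check Point’s actual proof-of-concept: the `[[CMD:...]]` marker format and the simulated AI response are invented for the example, and no network request or AI service is involved.

```python
import re

# Hypothetical marker format the attacker uses to hide commands in the
# staged web page. When the AI "summarizes" that page, the markers can
# survive into its response, where the malware picks them out.
CMD_PATTERN = re.compile(r"\[\[CMD:(.*?)\]\]")

def extract_commands(ai_response: str) -> list[str]:
    """Pull any embedded commands out of the text the AI returned."""
    return CMD_PATTERN.findall(ai_response)

# Simulated AI summary that echoed the staged page's hidden payload.
simulated_response = (
    "The page discusses cat care tips. [[CMD:collect_hostname]] "
    "It also covers feeding schedules. [[CMD:sleep 3600]]"
)

print(extract_commands(simulated_response))
```

The point of the sketch is that the malware itself contains no attacker infrastructure addresses - only a pattern to match against whatever text the AI service hands back.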
The researchers built a proof-of-concept using WebView2, Microsoft’s embedded browser component. The malware opens invisible browser instances, injects prompts via URL parameters (for Grok) or JavaScript (for Copilot), and extracts commands from the AI’s responses.
Check Point demonstrated full bidirectional communication: commands flowing to the malware and stolen data flowing back to the attacker.
Why This Is Hard to Detect
Traditional command-and-control traffic triggers security alerts because it contacts known-bad servers or follows unusual traffic patterns. This technique instead generates traffic to microsoft.com and x.com - domains that are explicitly whitelisted in most enterprise environments.
“Traffic to AI domains slips past firewalls and monitors since it mimics benign activity,” Check Point noted. The requests look exactly like a user asking Copilot or Grok a question. Security teams would have to inspect the actual content of AI conversations to spot the attack.
The researchers also demonstrated how attackers can encode or encrypt data within URL parameters to bypass content inspection. A URL like siamese-cats-fanclub.com/?data=SGVsbG8gV29ybGQ= doesn’t look obviously malicious, but that query string could contain encoded system information or stolen credentials.
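The encoding step is straightforward to sketch. The domain below is the article’s own example; the stolen-data string and parameter name are illustrative, and a real attacker would likely layer encryption on top of the base64.

```python
import base64
from urllib.parse import urlencode, urlparse, parse_qs

# Implant side: hide collected data inside an innocuous-looking query string.
stolen = "hostname=FINANCE-PC;user=jsmith"
encoded = base64.b64encode(stolen.encode()).decode()
url = "https://siamese-cats-fanclub.com/?" + urlencode({"data": encoded})
print(url)

# Attacker side: when the AI service is told to fetch this URL, the
# attacker's server simply decodes the parameter from its access logs.
params = parse_qs(urlparse(url).query)
recovered = base64.b64decode(params["data"][0]).decode()
print(recovered)
```

Content inspection sees only an opaque token in a query string - there is nothing syntactically malicious for a filter to match on.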
What Makes This Different
AI-based command-and-control represents a shift from static to dynamic malware. Check Point described the potential for “AI-Driven implants” - malware that uses AI not just for communication, but for decision-making.
An attacker could instruct the AI to analyze information about a compromised host and determine whether it’s worth exploiting further. The AI becomes “an external decision engine” that helps the malware adapt its behavior in real time without any hardcoded logic.
This isn’t theoretical. The researchers demonstrated how prompts could instruct the AI to evaluate system details and return different commands based on what it finds. A malware sample that contains zero suspicious code - just prompts - would be significantly harder to detect through static analysis.
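A rough sketch of what such a prompt-driven implant might look like. The prompt wording, the report fields, and the PROCEED/WAIT/ABORT vocabulary are all invented for illustration - the article does not publish Check Point’s actual prompts.

```python
# Hypothetical "decision engine" prompt: the implant carries no branching
# logic of its own, only text asking the AI which action to take next.
DECISION_PROMPT = (
    "Summarize this system report, then end your reply with exactly one "
    "word: PROCEED, WAIT, or ABORT.\n"
    "Report: domain_joined={domain_joined}; av={av}; uptime_days={uptime}"
)

def build_prompt(domain_joined: bool, av: str, uptime: int) -> str:
    return DECISION_PROMPT.format(domain_joined=domain_joined, av=av,
                                  uptime=uptime)

def parse_decision(ai_response: str) -> str:
    """Take the AI's final word as the next action; default to WAIT."""
    for token in ("PROCEED", "WAIT", "ABORT"):
        if ai_response.strip().endswith(token):
            return token
    return "WAIT"

print(build_prompt(True, "Defender", 42))
```

Statically, this sample is a format string and a three-word matcher - which is exactly why the article argues such implants would resist signature-based detection.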
Vendor Responses
Check Point responsibly disclosed the technique to both Microsoft and xAI before publication.
Microsoft confirmed the findings and implemented changes to Copilot’s web fetch behavior, though the company didn’t specify what changed. Microsoft’s statement emphasized that “attackers may attempt to communicate using a variety of available services, including AI-based services” and recommended organizations implement “defense-in-depth security practices.”
xAI has not publicly commented on changes to Grok.
The Bigger Picture
This attack relies on a prerequisite: the attacker must first compromise the target machine. Check Point hasn’t observed this technique in active campaigns yet.
But the implications extend beyond this specific attack vector. Every AI service with web browsing capabilities is a potential relay point. As AI assistants integrate deeper into enterprise workflows - accessing internal documents, browsing intranets, connecting to business applications - the attack surface expands.
Check Point’s concluding observation: “As AI continues to integrate into everyday workflows, it will also integrate into attacker workflows.”
What You Can Do
Treat AI service domains as high-value egress points. Traffic to copilot.microsoft.com and x.com/grok should be monitored with the same scrutiny applied to other cloud services that could be abused for data exfiltration.
Monitor for automated AI access. Unusual patterns of AI service usage - particularly from processes that aren’t typical browser applications - could indicate malware using AI as a C2 channel.
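One way to operationalize that monitoring is a simple process-versus-domain heuristic. This is an illustrative sketch, not a product integration: the event fields, the domain list, and the browser allowlist are assumptions you would adapt to your own EDR or proxy telemetry.

```python
# Hypothetical detection heuristic: flag network events where a process
# that is not a known browser reaches an AI assistant domain.
AI_DOMAINS = {"copilot.microsoft.com", "x.com"}
KNOWN_BROWSERS = {"chrome.exe", "msedge.exe", "firefox.exe"}

def suspicious(events: list[dict]) -> list[dict]:
    """Return events touching AI domains from unexpected processes."""
    return [e for e in events
            if e["domain"] in AI_DOMAINS
            and e["process"].lower() not in KNOWN_BROWSERS]

events = [
    {"process": "msedge.exe", "domain": "copilot.microsoft.com"},
    {"process": "updater.exe", "domain": "copilot.microsoft.com"},
]
print(suspicious(events))
```

One caveat: because the proof-of-concept used WebView2, its requests originate from an embedded browser runtime rather than the malware binary itself, so a process-name check alone is a weak signal - correlating parent process, user session activity, and request frequency strengthens it.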
Update incident response playbooks. If you detect AI-based C2 in your environment, traditional approaches like domain blocking won’t work. The traffic is going to legitimate services.
Review AI deployment policies. Organizations allowing AI assistants with web browsing capabilities should understand that this feature creates potential abuse vectors.
Keep systems patched. This attack requires initial compromise. Standard security hygiene - patching, endpoint detection, email filtering - remains the first line of defense.
The researchers’ proof-of-concept is a warning shot. AI services are becoming infrastructure - and like all infrastructure, they can be weaponized.