The week’s AI security news reads like a warning about what happens when powerful tools become widely available. Android malware is now using Gemini to navigate infected phones in real-time. A low-skill attacker used Claude and DeepSeek to compromise over 600 networks across 55 countries. And NIST has launched an emergency initiative to develop security standards for AI agents before things get worse.
Here’s what happened, who’s affected, and what it means.
PromptSpy: Android Malware Powered by Gemini
ESET researchers have discovered what they’re calling the first Android malware to use generative AI at runtime. The malware, dubbed PromptSpy, doesn’t just contain static AI-generated code - it actively queries Google’s Gemini during execution to figure out how to evade detection.
The technique is clever. PromptSpy sends Gemini XML snapshots of the device’s current screen along with natural language requests like “help me stay pinned in the recent apps list.” Gemini responds with JSON instructions specifying exactly where to tap and what actions to perform.
This matters because it means the malware can adapt to any device, any screen layout, any Android version. Traditional malware relies on hardcoded paths and element IDs that break when manufacturers customize their interfaces. PromptSpy sidesteps that entirely by asking an AI to figure it out.
Beyond persistence, the malware can capture lockscreen PINs, record screen activity as video, take screenshots, and deploy a VNC module for full remote access. It spreads through a fake JPMorgan Chase download site and appears to target users in Argentina, though ESET notes it was likely developed in a Chinese-speaking environment based on debug strings.
The good news: PromptSpy hasn’t appeared on Google Play and hasn’t been spotted in active campaigns yet. ESET believes it may still be a proof of concept. The bad news: the technique works, and now everyone knows it.
AI-Powered Network Compromise at Scale
Amazon Threat Intelligence published findings on a Russian-speaking threat actor who used commercial AI tools to compromise over 600 FortiGate firewall devices across 55 countries between January 11 and February 18.
The attacker wasn’t sophisticated. According to Amazon’s analysis, they had “limited technical capabilities” and repeatedly failed when attempting anything beyond basic automated attacks. Their own documentation recorded that many targets had “either patched the services, closed the required ports, or had no vulnerable exploitation vectors.”
What made the campaign effective was AI augmentation. The attacker used DeepSeek to generate attack plans from reconnaissance data and configured Anthropic’s Claude as a “coding agent” for vulnerability assessments. They systematically scanned FortiGate management interfaces exposed to the internet and used AI assistance to test weak credentials at scale.
No zero-days were involved. Every compromise exploited the same basic failures: management ports exposed to the internet and single-factor authentication with weak passwords. Post-compromise activities included Active Directory attacks, credential harvesting, and probing backup infrastructure - patterns consistent with ransomware preparation.
Amazon’s CISO CJ Moses put it bluntly: the attacker “achieved an operational scale that would have previously required a significantly larger and more skilled team.”
This is the scenario security researchers have been warning about. AI doesn’t need to discover new vulnerabilities to be dangerous. It just needs to help less-skilled attackers exploit existing ones faster.
BeyondTrust: 11,000 Exposed, Ransomware Active
A critical vulnerability in BeyondTrust’s Remote Support and Privileged Remote Access products is under active exploitation in ransomware attacks.
CVE-2026-1731 carries a CVSS score of 9.9 out of 10. It allows unauthenticated remote attackers to execute arbitrary operating system commands by sending specially crafted requests. The vulnerability was publicly disclosed on February 6, and exploitation began within 24 hours of the first public proof-of-concept appearing.
CISA added it to the Known Exploited Vulnerabilities catalog on February 13. Palo Alto’s Unit 42 has observed attackers deploying VShell and SparkRAT to gain persistence, move laterally, and maintain remote access to compromised systems.
Roughly 11,000 BeyondTrust Remote Support instances are exposed online, with around 8,500 on-premises systems potentially vulnerable if not patched. Cloud customers were automatically updated on February 2.
Patches are available: Remote Support 25.3.2+ and Privileged Remote Access 25.1.1+. If you’re running BeyondTrust products, check your version now.
NIST Launches AI Agent Security Initiative
On February 17, NIST’s Center for AI Standards and Innovation announced the AI Agent Standards Initiative, acknowledging that autonomous AI systems are now working “autonomously for hours” managing emails, writing code, and shopping for goods - and security frameworks haven’t kept up.
The initiative has three pillars: facilitating industry-led standards development, supporting open-source protocol development for agent interoperability, and funding research into AI agent security and identity.
NIST is soliciting public input through two channels:
- Request for Information on AI Agent Security (due March 9)
- AI Agent Identity and Authorization Concept Paper (due April 2)
Starting in April, NIST will hold listening sessions on sector-specific barriers to AI adoption, with AI agent security as a focus.
The timing isn’t coincidental. After watching PromptSpy use Gemini for evasion and low-skill attackers use Claude for network compromise, regulators are realizing that AI agent capabilities have outpaced the security infrastructure meant to contain them.
DeepSeek System Prompt Extracted
Security researchers at Wallarm successfully extracted DeepSeek’s entire system prompt using a jailbreak technique that exploited the model’s response logic.
The system prompt is the hidden set of instructions that governs how an AI model behaves and what restrictions it follows. Extracting it reveals the model’s operational rules and potentially exposes weaknesses in other models using similar architectures.
Wallarm notified DeepSeek, and the vulnerability has been patched. The researchers aren’t publishing the specific technique, concerned it might work against other large language models.
The extracted prompt revealed references to OpenAI models in DeepSeek’s knowledge base, adding fuel to ongoing questions about the origins of DeepSeek’s training data.
Following security concerns, Australia has banned DeepSeek from government devices, citing “unacceptable risks” to national security. Italy, Taiwan, South Korea, and France have implemented similar restrictions.
What This Means
This week’s incidents illustrate a shift in the AI security threat model.
The traditional concern was that AI might develop dangerous autonomous capabilities. The actual danger, at least right now, is simpler: AI tools are making existing attack techniques more accessible and more scalable.
PromptSpy doesn’t need to be particularly sophisticated because Gemini handles the hard part - figuring out how to navigate unfamiliar device interfaces. The FortiGate attacker didn’t need elite skills because Claude and DeepSeek helped bridge the gap. Both attacks exploited basic security failures that have existed for years.
NIST’s emergency initiative acknowledges what these incidents demonstrate: AI agents are already operating at scale in both legitimate and malicious contexts, and the security frameworks to govern them don’t exist yet.
What You Can Do
Android users: Don’t install apps from outside the Play Store. If you must, verify the source thoroughly. PromptSpy spread through a fake banking site that looked legitimate at first glance.
Network administrators: Audit your FortiGate and BeyondTrust deployments immediately. Check for:
- Management interfaces exposed to the internet (ports 443, 8443, 10443, 4443)
- Single-factor authentication
- Weak or default credentials
- Unpatched versions
BeyondTrust customers: Patch to Remote Support 25.3.2+ or Privileged Remote Access 25.1.1+. If you can’t patch immediately, consider taking exposed instances offline until you can.
Everyone deploying AI agents: Follow NIST’s Request for Information process if you have security concerns to raise. Standards developed now will shape how AI agents operate for years.
DeepSeek users: Consider the implications of using a model that several governments have banned from official systems. The system prompt extraction demonstrates that the model’s internal workings may not be as opaque as assumed.