77% of Security Teams Are Concerned About AI Agent Attacks

A new Darktrace report finds most organizations lack formal AI security policies, even as attack volumes surge and AI agents gain employee-level access across enterprises.

Security teams know they’re outmatched. They’re deploying AI anyway.

Darktrace’s 2026 State of AI Cybersecurity Report, released today, surveyed 1,540 cybersecurity professionals across 14 countries. The numbers tell a story of an industry caught between competitive pressure and defensive capacity.

Start with the headline finding: 77% of security professionals are concerned about AI agent security risks. More than three-quarters of the people responsible for protecting these systems are worried about their ability to do so.

But that concern isn’t translating into action. Only 37% of organizations have formal policies for securely deploying AI. That’s down eight percentage points from last year.

Let that sink in. As AI agents gain access to sensitive data and business-critical systems, governance is actually declining.

The Attack Volume Problem

The report documents a security environment under siege. Among respondents:

  • 87% report AI is significantly increasing attack volume
  • 89% say attacks are becoming more sophisticated
  • 91% note AI is supercharging phishing and social engineering
  • 73% say AI-powered threats are already significantly impacting their organization

The top threat identified? Hyper-personalized phishing at 50%, followed by automated vulnerability scanning (45%), adaptive malware (40%), and deepfake voice fraud (39%).

Traditional security models were built for human-speed attacks. They assumed adversaries needed time to research targets, craft convincing messages, and manually exploit vulnerabilities. AI eliminates those constraints.

Hadrian’s research on AI-driven cyberattacks puts it bluntly: two out of three CISOs identify AI-driven threats as their top concern for 2026. CEO Rogier Fischer warned that “traditional defensive cybersecurity will no longer be sufficient in an AI-first world.”

The Preparedness Paradox

Here’s where the data gets uncomfortable.

Despite 92% of respondents saying AI threats are driving major upgrades to their defenses, 46% still feel unprepared to defend against AI-driven attacks. That’s nearly half of security professionals admitting their organizations can’t handle the threat they’re facing.

The World Economic Forum calls this the “preparedness paradox.” Organizations recognize AI risks, understand the stakes, and still deploy faster than they can secure.

Competitive pressure explains part of it. Nobody wants to be the company that moved too slowly on AI while competitors captured efficiency gains. But the security debt is accumulating.

A Vanta survey of 2,500 business and IT leaders found that nearly three-quarters believe AI threats are outpacing their ability to manage them. Separate research shows 65% say their use of agentic AI outpaces their understanding of it. Only 48% have a framework for granting or limiting autonomy in AI systems.

Microsoft’s 2026 Data Security Index confirms the pattern: companies are rapidly deploying generative and agentic AI while data security controls struggle to keep pace.

The Governance Collapse

The 37% figure on formal AI security policies deserves closer examination.

This means nearly two-thirds of organizations are deploying AI systems without formal security frameworks. No documented procedures for access controls. No standardized monitoring requirements. No clear accountability chains.

And the trend is moving in the wrong direction. Last year, 45% of organizations had formal policies. This year, 37%. An eight-point decline during a period when AI deployment accelerated dramatically.

The Darktrace report identifies specific governance failures:

  • Data exposure is the top risk cited (61%)
  • Privacy and security regulation violations follow at 56%
  • Misuse or abuse of AI tools at 51%

These aren’t exotic attack scenarios. They’re the predictable consequences of deploying systems without adequate controls.

Darktrace’s own network data illustrates the problem. In October, the company observed a 39% month-over-month increase in anomalous data uploads to generative AI services. The average anomalous upload was 75MB - roughly 4,700 pages of documents. That’s sensitive information leaving organizations through AI tools, often without security teams even knowing.

When AI Has Employee Access

The report highlights a fundamental shift in the threat model.

“Enterprises are embracing AI fast, and while AI tools are helping security teams better defend against attacks, agentic AI introduces a new class of insider risk,” said Issy Richards, VP of Product at Darktrace. “These systems can act with the reach of an employee - accessing sensitive data and triggering business processes - without human context or accountability.”

This is the core problem. AI agents aren’t just tools anymore. They’re autonomous systems with credentials, API access, and the ability to take actions that affect business operations. They can read documents, send emails, modify databases, and trigger workflows.

An employee with that level of access would require background checks, onboarding, ongoing monitoring, and clear accountability structures. AI agents are being deployed without equivalent safeguards.

Richards added that “if AI agents are operating inside your organization, their governance, access controls, and monitoring are a board-level responsibility, not just a technical one.”

That framing matters. Security teams can’t solve this alone. The decisions that create AI security risk - which tools to deploy, what access to grant, how much autonomy to allow - are business decisions made at the executive level.

The C-Suite Disconnect

The World Economic Forum’s research reveals a troubling gap between how different leaders perceive AI security.

Among chief executives, 67.1% trust AI tools to help make cybersecurity decisions. Among CISOs - the people actually responsible for security - only 58.6% feel the same. CISOs are also significantly less confident than CEOs that AI will strengthen cyber defenses (19.5% vs. 29.7%).

This confidence gap has consequences. CEOs push for faster AI adoption based on optimistic assumptions about risk. CISOs lack the organizational authority to slow deployment until security catches up.

The Darktrace data on declining formal policies suggests CISOs are losing this internal battle. Organizations are choosing speed over security governance.

What Would Help

The report isn’t entirely pessimistic. It documents genuine progress in how security teams are using AI defensively.

Among respondents, 77% have integrated generative AI into their security stack. And 96% say AI significantly boosts their speed and efficiency. The technology isn’t just creating problems - it’s also helping solve them.

Security teams say AI delivers its greatest value where human analysts struggle most: detecting novel threats and identifying anomalies at speed. Seventy-two percent rank this as the area where AI contributes most.

The challenge is translating those defensive capabilities into organizational readiness. That requires:

Formal policies that keep pace with deployment. If you’re adding AI tools, you should be adding governance at the same rate. The eight-point decline in formal policies represents a collective failure to maintain this balance.

Board-level accountability. AI security can’t be delegated exclusively to security teams. The access decisions that create risk happen at the business level.

Visibility into AI behavior. You can’t secure what you can’t see. Organizations need monitoring capabilities that track how AI systems access data and what actions they take.
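As an illustration only (not drawn from the report, and with entirely hypothetical names), a minimal audit wrapper shows what this kind of visibility might look like in practice: every data access an agent makes is recorded with the actor, action, and resource before the call proceeds, so nothing runs unobserved.

```python
import datetime
from typing import Any, Callable, List


class AgentAuditLog:
    """Hypothetical sketch: a log of every action an AI agent takes."""

    def __init__(self) -> None:
        self.entries: List[dict] = []

    def record(self, agent: str, action: str, resource: str) -> None:
        self.entries.append({
            "agent": agent,
            "action": action,
            "resource": resource,
            "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        })


def audited(log: AgentAuditLog, agent: str, action: str,
            resource: str, fn: Callable[[], Any]) -> Any:
    # Record the access first, then execute it, so the security team
    # has a trail even if the action itself fails partway through.
    log.record(agent, action, resource)
    return fn()


log = AgentAuditLog()
result = audited(log, "invoice-bot", "read", "crm/accounts.csv",
                 lambda: "42 rows")
print(log.entries[0]["resource"])
```

Real deployments would ship these entries to an append-only store rather than a Python list, but the principle is the same: the agent's reach becomes something security teams can actually see.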

Defined autonomy boundaries. The Darktrace report found that 14% of organizations allow AI to act independently in security operations, while 70% enable AI to take action with human approval. Only 13% keep AI limited to recommendations. Those boundaries should be explicit choices, not defaults.
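Those three operating modes map naturally onto an explicit policy setting. A minimal sketch (hypothetical names, not Darktrace's implementation) of making the boundary a deliberate, reviewable choice rather than a tool default:

```python
from enum import Enum


class AutonomyLevel(Enum):
    RECOMMEND_ONLY = "recommend"   # AI suggests, humans act (13% of orgs)
    HUMAN_APPROVAL = "approve"     # AI acts only after sign-off (70%)
    AUTONOMOUS = "autonomous"      # AI acts independently (14%)


class SecurityAgent:
    def __init__(self, level: AutonomyLevel) -> None:
        self.level = level

    def respond(self, action: str, approved: bool = False) -> str:
        # The autonomy boundary is enforced in one place, against an
        # explicit configuration value, not scattered through the code.
        if self.level is AutonomyLevel.RECOMMEND_ONLY:
            return f"RECOMMENDATION: {action}"
        if self.level is AutonomyLevel.HUMAN_APPROVAL and not approved:
            return f"PENDING APPROVAL: {action}"
        return f"EXECUTED: {action}"


agent = SecurityAgent(AutonomyLevel.HUMAN_APPROVAL)
print(agent.respond("isolate host 10.0.0.5"))                 # held for sign-off
print(agent.respond("isolate host 10.0.0.5", approved=True))  # now allowed to run
```

The point is not the ten lines of code; it is that the level is a named, auditable setting an organization chooses, rather than whatever behavior a vendor shipped.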

The Bottom Line

The Darktrace report documents an industry that understands the problem and hasn’t solved it.

Security professionals are concerned about AI agent risks. They’re experiencing increased attack volumes. They know their defenses are inadequate. And yet governance is declining while deployment accelerates.

This isn’t a technology gap. The defensive tools exist. It’s an organizational gap. Companies are making business decisions about AI deployment without corresponding decisions about AI security.

The 77% concern figure and the 37% policy figure tell the same story from different angles. Most security professionals are worried about AI agents. Most organizations haven’t formalized how to deploy them safely.

That gap is where the breaches will happen.


The full Darktrace report is available at darktrace.com.