56% of Security Teams Can't Tell You How Fast They'd Kill Their AI

ISACA surveyed 3,400 security professionals. Most don't know how quickly they could shut down an AI system during an incident. One in five doesn't know who's responsible.

Red emergency stop button on industrial machinery panel

Here’s a question every organization deploying AI should be able to answer: if your AI system starts doing something dangerous right now, how long until you can stop it?

ISACA asked 3,400 digital trust professionals — people in IT audit, governance, cybersecurity, and privacy roles — exactly that question. More than half of them couldn’t give an answer.
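
Mechanically, “stopping an AI system” means a gate that every model-driven action passes through, plus a flag someone on call can flip. The survey doesn’t prescribe an implementation, so treat the following as a minimal sketch with invented names: an in-process switch stands in for what would, in production, be a shared flag in a database or feature-flag service.

```python
import threading

class KillSwitch:
    """Process-wide gate checked before every AI action."""

    def __init__(self) -> None:
        self._halted = threading.Event()

    def halt(self, reason: str) -> None:
        # In production this would flip a shared flag so every
        # serving instance halts at once, not just this process.
        print(f"AI halted: {reason}")
        self._halted.set()

    @property
    def halted(self) -> bool:
        return self._halted.is_set()


def call_model(prompt: str) -> str:
    # Stand-in for whatever model API the organization actually uses.
    return f"model output for: {prompt!r}"


switch = KillSwitch()

def run_ai_action(prompt: str) -> str:
    # The gate: once the switch is set, nothing executes.
    if switch.halted:
        raise RuntimeError("AI system halted; action refused")
    return call_model(prompt)
```

The question ISACA asked then becomes concrete: how long from the decision to halt until that flag is set, and honored, in every process serving traffic?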

The Numbers

The 2026 AI Pulse Poll, released at RSA Conference 2026, surveyed professionals whose job titles explicitly include words like “security” and “governance.” These aren’t random employees. These are the people responsible for keeping AI systems under control. Here’s what they reported:

  • 56% don’t know how quickly they could halt an AI system during a security incident
  • 32% estimate they could shut it down within 60 minutes
  • 7% say it would take longer than an hour
  • 5% say they don’t know at all

Only 43% expressed high confidence they could investigate a serious AI incident and explain it to leadership or regulators. Twenty-seven percent reported low to no confidence.

Nobody’s Watching, Nobody’s Responsible

The survey’s findings on human oversight are equally grim:

  • 36% say humans approve most AI-generated actions before execution
  • 26% review selected decisions after the fact
  • 11% only intervene when an alert fires
  • 20% don’t know what oversight mechanisms exist in their organization

That last number deserves to sit with you for a moment. One in five security professionals doesn’t know how their organization monitors AI decisions. These systems are already making choices that affect customers, finances, and operations, and the people tasked with securing them can’t describe the oversight model.
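
The first three answers correspond to three different places a human can sit relative to an AI action, and the difference is easiest to see in code. A hypothetical sketch, with input() standing in for a real review queue and every name invented for illustration:

```python
import logging
from dataclasses import dataclass

log = logging.getLogger("ai-oversight")

@dataclass
class ProposedAction:
    description: str
    risk: str  # e.g. "low" or "high"

def execute(action: ProposedAction) -> None:
    print(f"executing: {action.description}")

def run_with_pre_approval(action: ProposedAction) -> None:
    """Mode reported by 36%: a human approves before execution."""
    answer = input(f"Approve '{action.description}' (risk={action.risk})? [y/N] ")
    if answer.strip().lower() == "y":
        execute(action)
    else:
        print("rejected; the action never runs")

def run_with_post_hoc_review(action: ProposedAction) -> None:
    """Mode reported by 26%: execute now, a human reviews selected logs later."""
    execute(action)
    log.info("queued for review: %s", action.description)

def run_alert_only(action: ProposedAction) -> None:
    """Mode reported by 11%: no routine review; a human is paged only on an alert."""
    execute(action)
    if action.risk == "high":
        log.warning("ALERT: high-risk AI action already executed: %s",
                    action.description)
```

Note what the last two have in common: by the time a human looks, the action has already run.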

On accountability, the picture fractures further: 28% point to the board or executives, 18% say the CIO or CTO, 13% say the CISO, and 20% don’t know who holds responsibility for AI governance. When everyone is responsible, no one is.

The Disclosure Gap

AI disclosure — telling users and stakeholders when AI is involved — is supposed to be table stakes for responsible deployment. ISACA found:

  • 18% require and enforce AI disclosure in work products
  • 20% require it but don’t consistently enforce it
  • 32% have no disclosure requirements at all

So roughly half of organizations either don’t require disclosure or require it and ignore their own policy. This isn’t a maturity gap. This is a governance vacuum.
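
“Require and enforce” doesn’t demand anything exotic. Enforcement can be as simple as a pipeline check that rejects any work product missing a disclosure line. The marker convention below is invented, not ISACA’s, and a real policy would be more nuanced:

```python
import sys
from pathlib import Path

DISCLOSURE_MARKER = "AI-assisted:"  # hypothetical convention

def has_disclosure(path: Path) -> bool:
    """Pass if the document declares whether AI was involved."""
    text = path.read_text(encoding="utf-8", errors="ignore")
    return DISCLOSURE_MARKER in text

def main(paths: list[str]) -> int:
    missing = [p for p in paths if not has_disclosure(Path(p))]
    for p in missing:
        print(f"{p}: no AI disclosure found", file=sys.stderr)
    return 1 if missing else 0  # nonzero exit blocks the pipeline

if __name__ == "__main__":
    sys.exit(main(sys.argv[1:]))
```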

Why This Matters More Than You Think

The ISACA survey landed the same week as Stanford’s 2026 AI Index, which reported 362 AI incidents in 2025 — up from 233 in 2024 — with OECD monitoring showing monthly counts hitting 435 at the start of 2026. The incidents are accelerating. The capacity to respond isn’t keeping pace.

As Jenai Marinkovic of Tiro Security put it in the ISACA release: “AI brings tremendous promise, but also enormous risk related to security and privacy.” That’s a polite way to say the industry shipped first and built the kill switch later. Or, in 56% of cases, can’t say how fast the kill switch works, or whether it exists at all.

The safety conversation in AI fixates on alignment research, reinforcement learning from human feedback (RLHF), and constitutional AI — sophisticated technical approaches to making models behave. But the boring operational question matters just as much: when the model misbehaves, can you stop it? For most organizations, the honest answer is “we don’t know, and we’re not sure who’d make the call.”
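
That answer is also measurable. Here is a hedged sketch of a shutdown drill, assuming the organization already has some way to trigger a halt and some health check that confirms it; both are passed in as callables because they differ everywhere:

```python
import time
from typing import Callable

def time_shutdown(trigger_halt: Callable[[], None],
                  all_instances_halted: Callable[[], bool],
                  timeout_s: float = 3600.0,
                  poll_s: float = 1.0) -> float:
    """Seconds from the halt decision until every serving instance
    confirms it has stopped executing AI actions."""
    start = time.monotonic()
    trigger_halt()  # e.g. flip the feature flag, revoke the API key
    while not all_instances_halted():  # e.g. poll a health endpoint
        if time.monotonic() - start > timeout_s:
            raise TimeoutError("shutdown not confirmed within the timeout")
        time.sleep(poll_s)
    return time.monotonic() - start
```

Run that as a quarterly drill and ISACA’s question gets answered with a number instead of a shrug.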

That’s not a research problem. That’s an organizational failure happening at scale, and it’s happening right now.