78% of Companies Would Fail an AI Governance Audit

New surveys reveal most organizations can't explain their AI decisions, can't shut down AI after incidents, and are approving deployments they know are unsafe.


Three in four corporate boards have approved major AI investments. Half of those same boards haven’t set governance expectations for the technology they’re funding. That number comes from Grant Thornton’s 2026 AI Impact Survey, released in April, which surveyed nearly 1,000 senior US business leaders and found a gap between AI spending and AI accountability wide enough to drive a regulatory action through.

The headline number: 78% of respondents lack full confidence that their organization could pass an independent AI governance audit within 90 days. Not a surprise inspection. Ninety days of preparation, and nearly four out of five still wouldn’t bet on passing.

The Numbers Paint a Consistent Picture

Grant Thornton isn’t alone in documenting this gap. Three major surveys released in the first quarter of 2026 converge on the same conclusion from different angles.

ISACA’s 2026 AI Pulse Poll, released at RSA Conference in March, surveyed more than 3,400 digital trust professionals — IT auditors, governance specialists, cybersecurity professionals, and privacy experts. The finding that stands out: 56% don’t know how quickly they could halt an AI system after a security incident. Of those who could give a timeline, 7% said it would take more than an hour. Less than half — 43% — have high confidence their organization could explain a serious AI incident to regulators.

Trend Micro’s global study of 3,700 business and IT decision-makers adds the human pressure dimension: 67% have felt pressured to approve AI despite security concerns. One in seven described those concerns as “extreme” but said they were overridden to keep pace with competitors. Fifty-seven percent said AI is advancing faster than their ability to secure it.

Lay these surveys side by side:

  • 78% lack confidence they could pass an AI governance audit (Grant Thornton)
  • 56% can’t say how fast they’d shut down AI after an incident (ISACA)
  • 67% have been pressured to approve AI they knew had security problems (Trend Micro)
  • Only 43% are confident they could explain an AI incident to regulators (ISACA)

These surveys aren’t sampling the same population. Grant Thornton surveyed C-suite executives. ISACA surveyed technical governance professionals. Trend Micro surveyed 3,700 business and IT decision-makers worldwide. The consistency across different populations and methodologies makes the individual numbers harder to dismiss.

The Board Problem

Grant Thornton’s survey reveals a structural disconnect at the highest levels of corporate governance. Seventy-five percent of boards have approved major AI investments. But only 52% have set clear AI governance expectations, and just 54% have integrated AI risk into ongoing board oversight.

This means roughly one in four boards approved significant AI spending without establishing governance guardrails or ongoing risk monitoring for what they’re funding: even if every board that set expectations also approved an investment, the gap between 75% and 52% leaves at least 23% of boards that did the first without the second. They wrote the check and moved on.

The consequences show in the performance data. Among organizations with fully integrated AI governance, 58% report revenue growth, versus 15% of those still piloting: nearly four times the rate. Companies that govern AI well don’t just avoid risk; they make more money. And yet only 22% of operations leaders have fully developed and implemented an AI strategy. Half say they need a formal governance plan within six months just to maintain current performance.

“Companies are making tremendous investments into AI and yet, we’re not seeing that correlate with an increase in AI accountability,” said Tom Puthiyamadam of Grant Thornton. “AI deployment is simply outpacing the infrastructure that supports it.”

73% Scale Autonomous AI. 20% Tested What Happens When It Fails.

The most alarming gap in the Grant Thornton data is between autonomous AI deployment and failure preparedness. Seventy-three percent of surveyed organizations are piloting or scaling autonomous AI. Only 20% have tested their response plans for when those systems fail.

Autonomous AI systems — agents that take actions without human approval for each step — represent a qualitatively different risk category from standard AI tools. When a recommendation engine produces bad suggestions, someone can ignore them. When an autonomous agent executes a faulty decision in a financial transaction, supply chain operation, or customer interaction, the damage is done before anyone reviews it.
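To make that distinction concrete, here is a minimal sketch in Python. All names are hypothetical, not drawn from any surveyed organization’s stack; the point is that the only structural difference between the two flows is whether a human sits between the model’s decision and the side effect.

```python
# Illustrative sketch: hypothetical names, no vendor API implied.
# It contrasts the two control flows described above.

from dataclasses import dataclass


@dataclass
class Action:
    description: str
    amount: float  # e.g., a payment or order size


def human_approves(action: Action) -> bool:
    """Stand-in for a manual review step (a UI prompt, a ticket queue)."""
    return input(f"Approve '{action.description}'? [y/N] ").strip().lower() == "y"


def execute(action: Action) -> None:
    """Stand-in for the side effect: a payment, an order, a customer email."""
    print(f"Executed: {action.description} (${action.amount:,.2f})")


def recommendation_flow(action: Action) -> None:
    # A bad suggestion stops here: a person can simply decline it.
    if human_approves(action):
        execute(action)


def autonomous_flow(action: Action) -> None:
    # A bad decision does not stop here: the side effect lands
    # before any human reviews it.
    execute(action)
```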

Running autonomous systems at scale without tested failure response plans is the organizational equivalent of deploying untested software to production and hoping the error logs are informative. Except in this case, most organizations also can’t tell you how fast they’d notice the problem.
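ISACA’s halt-time question from earlier has a concrete engineering shape. Below is a minimal kill-switch sketch, again with hypothetical names and a deliberately simplified design: every agent action checks a central halt flag before executing, so “how fast could we stop it” has an answer bounded by one check. In production the flag would live in a shared store (a database row, a feature-flag service) so that one flip halts every running agent.

```python
# Minimal kill-switch sketch: hypothetical design, not taken from the
# surveys. Every autonomous action checks a central flag that incident
# responders can flip, bounding halt time by the length of one check.

import threading


class KillSwitch:
    """Process-local halt flag. A real deployment would back this with
    a shared store so a single flip stops every running agent."""

    def __init__(self) -> None:
        self._halted = threading.Event()

    def halt(self) -> None:
        self._halted.set()

    def is_halted(self) -> bool:
        return self._halted.is_set()


SWITCH = KillSwitch()


def guarded_step(description: str) -> None:
    # The guard runs before the side effect, never after.
    if SWITCH.is_halted():
        raise RuntimeError(f"Halted before executing: {description}")
    print(f"Executed: {description}")


if __name__ == "__main__":
    guarded_step("reorder inventory")  # proceeds normally
    SWITCH.halt()                      # incident response flips the flag
    try:
        guarded_step("issue refund")   # blocked by the switch
    except RuntimeError as exc:
        print(exc)
```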

The EU AI Act Enters an Unprepared World

The EU AI Act’s main enforcement window opens August 2, 2026 — roughly three months from now. Under Annex III, most AI used in recruitment, performance management, and workforce monitoring is classified as high-risk, requiring documented risk management, human oversight provisions, and technical documentation demonstrating compliance.

The Grant Thornton survey found 43% of respondents cite regulatory and compliance uncertainty as their top AI concern. ISACA found less than half are confident they could explain a serious AI incident to leadership or regulators.

The EU isn’t the only jurisdiction tightening requirements. California’s Executive Order N-5-26, issued March 30, directs state agencies to draft AI safety requirements. Multiple states are advancing AI governance legislation. The regulatory environment is converging toward accountability requirements that most organizations, by their own admission, cannot currently meet.

Forty-six percent of Grant Thornton’s respondents identified governance or compliance barriers as the top factor in AI underperformance. They know the gap exists. They know it’s costing them. And three months before major enforcement begins, nearly four out of five still aren’t confident they could pass an audit.

The industry’s response so far has been to deploy faster and govern later. The surveys suggest “later” is arriving, and the infrastructure to handle it isn’t.