iPrompt Deep Dive

Companion piece to iPrompt Wednesday Issue #135.

PUBLISHED WEDNESDAY, 13 MAY 2026 · BY R. LAURITSEN · 10 MIN READ

Why the CAIO can’t save you

The fourteen enterprise functions that have to be rebuilt before agentic AI is safe to deploy — and the three the org-chart approach gets wrong by default.

When Google’s Threat Intelligence Group published their report on the first AI-built zero-day this week, I read it twice. Not for the headline — that was already in every newsletter on Wednesday. I read it for one sentence buried in the analysis: that the criminal crew didn’t need any specialised infrastructure to find this flaw. They used a frontier model and patience.

Somewhere in the Fortune 500 this morning, a Chief AI Officer is preparing a slide deck for next quarter’s board review. Their governance framework is thorough. It has passed three internal reviews. It will not have prevented the breach that, statistically, is now coming — because that breach won’t come from an unmonitored model or a careless prompt. It will come from a workflow that wasn’t on the inventory. A vendor’s AI feature that quietly turned on in April. A trust assumption nobody documented because nobody knew there was a category called “trust assumption” to document.

The post-mortem will use the word governance a lot. The word it should use — the one that names what was actually missing — is detection. That’s the gap this piece is about.

The case the news made this week

Three signals from this week’s AI cycle line up uncomfortably.

Google’s Threat Intelligence Group confirmed — with “high confidence” — that a criminal crew used an LLM to find and weaponise a 2FA bypass in a popular open-source admin tool. The first weaponised, AI-built zero-day in the wild. GTIG’s structural finding underneath the incident matters more than the incident itself: LLMs can now spot the kind of semantic logic flaws — hidden behind hardcoded trust assumptions — that fuzzers and static scanners are structurally built to miss.
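To make that category concrete, here is a minimal, entirely hypothetical sketch — not the reported flaw, and every name in it is invented — of what a hardcoded trust assumption looks like in code. A fuzzer walks right past it, because every input produces a clean, well-formed response:

```python
# Hypothetical illustration of a "hardcoded trust assumption" — NOT the
# actual vulnerability GTIG reported. Every code path returns cleanly,
# so a fuzzer or static scanner sees nothing to flag.

def verify_login(user, password_ok, otp_code, otp_store):
    """Return True if the user may log in."""
    if not password_ok:
        return False
    # Trust assumption: service accounts were "already verified upstream",
    # so 2FA is skipped for them. Nothing in this code enforces that the
    # upstream check ever ran — anyone who can register a username with
    # the "svc-" prefix inherits the bypass.
    if user.startswith("svc-"):
        return True
    return otp_store.get(user) == otp_code
```

The flaw is semantic, not syntactic: each line is individually correct, and only reasoning about *who can satisfy the `svc-` prefix* reveals the 2FA bypass. That reasoning step is exactly what GTIG says LLMs can now do at scale.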

In the same week, IBM published survey data showing that 76% of large enterprises have hired a Chief AI Officer, up from 26% in 2025. And from the same survey: only 32% of those enterprises report sustained, organisation-wide AI impact. Two-thirds are still in pilots.

Take a minute on the implication. The capability to find logic flaws no human scanner catches is now widely available — to anyone with API access and time. The capability to defend against it sits with two-thirds of enterprises that haven’t moved past pilot phase. The gap between those two capabilities is the entire story. And it’s widening, not closing.

The standard enterprise response to that gap, in 2026, is to appoint a Chief AI Officer. I think that response is structurally wrong — not because the people taking these roles are wrong (most of them are very good), but because the role itself was designed for a different threat curve than the one we now live in.

Three things the org chart gets wrong by default

1. Authority over policy, no authority over capability

The CAIO writes the AI policy. The security team owns the actual surface area. The engineering leads own the deployment decisions. The product team owns which models touch which data.

In most org charts, none of those teams report to the CAIO. The CAIO is one structural step removed from every system that matters. They can ask for changes; they can’t make them. Which means the moment the threat moves faster than the request-for-change cycle, the CAIO’s authority becomes ornamental. And the threat is already moving faster than the request-for-change cycle. Google just proved it.

2. AI defence is mostly unmeasurable. The role isn’t built for that.

Every other C-suite role produces visible deliverables. The CFO produces statements. The CMO produces campaigns. The CRO produces revenue. Boards know how to read those outputs because they generate the kind of evidence boards are trained on.

The hardest, most valuable work of AI defence is structurally invisible. Killing a redundant LLM integration produces no artefact. Blocking a vendor’s opt-in AI feature produces no slide. Tightening the contract clause on training-data usage produces no metric. Each of those actions shrinks the attack surface — none of them shows up in a quarterly review.

So CAIOs gravitate, predictably, to the work that is visible: frameworks, policy documents, governance heatmaps, vendor risk matrices. The work that produces deliverables. The role is structurally pulled toward the wrong work — not by bad intent, but by the incentives every C-suite role carries. The thing that would defend the company hardest is also the thing the role gets the least credit for.

3. The CAIO model treats AI as a vendor relationship; it’s a capability

In 1998, no serious company appointed a Chief Internet Officer to govern the internet. They built engineers who could ship on it. The companies that confused vendor management with capability spent the next decade losing to the companies that didn’t.

The CAIO architecture suggests AI is a thing your company uses — and so needs a relationship manager. The threat curve says AI is a muscle your company has, or doesn’t. Defending against AI-built exploits requires running offensive AI in-house. You can’t outsource that. You can’t govern it externally. You either have the capability or you don’t, and if you don’t, no procurement contract closes the gap.

The fourteen functions to rebuild

If the CAIO role is structurally one step removed from the systems that matter, then the rebuild question isn’t who owns the AI policy — it’s which functions have to exist somewhere in the organisation, and which of those functions almost certainly don’t exist yet.

Below are fourteen, grouped into four buckets. Take this list to your CAIO. Ask which ones your company has in production today. Most companies will fail on ten or more.

AI-NATIVE = new because of AI. PRE-AI = not new, but now matters ten times more.

BUCKET A — DETECTION — WHAT THE CAIO CAN’T SEE

BUCKET B — SURFACE AREA — WHAT THE CAIO CAN’T SHRINK

BUCKET C — VENDOR POSTURE — WHAT THE CAIO DOESN’T CONTROL

BUCKET D — CAPABILITY — WHAT THE CAIO IS STRUCTURALLY WRONG TO OWN

What to do this week

If you’re an operator reading this and your company has a CAIO, you’re not going to dismantle the role this week. That’s not the call. The call is: build the muscle the CAIO can’t.

Three concrete moves, in order of leverage:

1. Run the inventory above this week. By hand. Take the fourteen items, ask which exist in production today. Pay particular attention to the eight AI-NATIVE ones — those are where the gap is widest. The exercise itself is the deliverable; most companies have never done it.

2. Pick the single highest-risk surface from Bucket B. Run the Adversarial Audit prompt from this week’s newsletter against it. Reply with what surfaced; the most uncomfortable findings will anchor next week’s analysis.

3. Have a conversation with your CAIO that includes this list. Not as a critique of the role — as a calibration of where the role’s authority actually reaches. The honest answer to “which of these do we own?” will tell you more than any framework.

The honest version

The CAIO is the right role for 2024. We are not in 2024 anymore.

It was a sensible response to the question that mattered most two years ago — what should we do about AI? The role was built for governance because that was the bottleneck. Policy, vendor selection, employee guidelines, ethical frameworks. Useful work. Necessary work.

The question that matters most now is different. It’s what is moving against us, and how fast can we move back? That question needs a different muscle, a different reporting line, and a different ratio of policy to capability than the CAIO role was designed to deliver. Asking it of an existing CAIO isn’t a critique. It’s a calibration.

Governance is downstream of detection. You can’t audit your way out of a capability gap. The companies that survive the next eighteen months will be the ones whose security and engineering teams are already running offensive AI in-house — whose CAIO, if they have one, is the second-most-important AI hire in the building, not the first. Most companies will get that ranking exactly backwards. That backwardness is the opportunity for everyone who reads this.

— R. Lauritsen

If this argument changed how you think about your AI org, forward it to whoever currently owns AI strategy in your company. The conversation is more useful with two people in the room.
