What you get in this FREE Newsletter
In today’s 5-Minute AI Digest, you’ll get:
1. The MOST important AI News & research
2. AI Prompt of the week
3. AI Tool of the week
4. AI Tip of the week
…all in a FREE Weekly newsletter.
Sponsor:
Stop making AI decisions in the dark. Understand AI usage.
Leadership is asking: are we getting value from AI? Which tools are worth the spend? Where are we exposed? Right now, most teams have no idea.
Harmonic Security Usage Explorer changes that. It automatically classifies every AI interaction across your organization into the use cases driving real work, specific to your business. Not generic categories. Not raw prompts. Actual patterns to understand: how your teams are using AI, how much time they spend in AI, the cost, and where risk lives.
CIOs get the data to rationalize spend and cut wasted licenses. CISOs get risk in context. AI committees get proof of impact.
Early access is now open to a limited number of organizations. Request your spot.
iPrompt Wednesday
The AI newsletter that turns news into action.
ISSUE #135 · WEDNESDAY, 13 MAY 2026 · 7 MIN READ
THE HOOK
A hallucinated severity score, sitting inside a malware script. A built-in help menu, the kind that appears inside code that’s supposed to be quiet. That’s how Google’s threat hunters knew an AI had written it — and that the first weaponised, AI-built zero-day in the wild had just been caught, days before its launch.
AI News Roundup
🚨 Google catches the first AI-written zero-day in the wild
REPORTED — GOOGLE GTIG, 11 MAY
Google’s Threat Intelligence Group says it has “high confidence” a criminal crew used an LLM to find and weaponise a 2FA bypass in a popular open-source admin tool. The script gave itself away forensically: a hallucinated CVSS score, textbook Python formatting, a built-in help menu inside attacker code. Beneath the incident sits GTIG’s structural finding: LLMs can now spot semantic logic flaws (the kind hidden behind hardcoded trust assumptions) that fuzzers and static scanners are built to miss. Read the GTIG report →
What this means for operators: if your security stack is signature- or pattern-based, you have a blind spot in the category of bug AI is best at finding. That gap is now active, not theoretical.
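To make the bug class concrete, here is a toy sketch of a “hardcoded trust assumption” 2FA bypass. Everything in it is hypothetical, invented for illustration, not the actual vulnerability from the GTIG report. Note that no single line matches a vulnerability signature; the flaw only exists across functions, which is why it sits in a scanner’s blind spot:

```python
# Hypothetical illustration of a "hardcoded trust assumption" 2FA bypass.
# Every line is individually innocuous; the flaw is semantic: the code
# trusts a client-supplied header to decide whether 2FA applies.

def requires_2fa(headers: dict, user: dict) -> bool:
    # Trust assumption: "internal" traffic never needs a second factor.
    # But the header is attacker-controllable, so the assumption is false.
    if headers.get("X-Internal-Service") == "true":
        return False
    return user.get("2fa_enabled", False)

def login(headers: dict, user: dict, password_ok: bool, otp_ok: bool) -> bool:
    if not password_ok:
        return False
    if requires_2fa(headers, user):
        return otp_ok
    return True  # 2FA silently skipped for "trusted" traffic
```

An attacker who can set one header logs in with a stolen password alone. A fuzzer never exercises that path on purpose, and no signature matches it — a model reading the logic can.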
💰 Anthropic locks in $200B with Google Cloud
REPORTED — REUTERS / THE INFORMATION, 5 MAY
Five years, five gigawatts of TPU capacity from 2027 — and per The Information, that single contract is over 40% of Google’s entire reported cloud backlog. Add OpenAI’s commitments and you’re at roughly half of a ~$2 trillion backlog across the hyperscalers. The whole AI infrastructure economy now rests on two customers. Reuters →
What this means for operators: AI infrastructure has moved into the same threat model as semiconductors and rare earths — the Pentagon’s “supply chain risk” designation for Anthropic in March, and the seven-vendor classified-network contracts on 1 May, are the leading indicators. Multi-vendor LLM strategy isn’t cost optimisation now. It’s continuity planning.
👔 76% of large enterprises have hired a Chief AI Officer
REPORTED — IBM IBV SURVEY, VIA CNBC
Up from 26% in 2025 across 2,000+ organisations. Gartner’s Jonathan Tabah told CNBC the role probably won’t go fully mainstream — too expensive, too fuzzy. The deeper signal sits in what most CAIO mandates actually contain: governance, policy, vendor selection. Not capability. Companies are treating AI as a department to oversee, not a muscle to build. Those two postures produce very different five-year outcomes. CNBC →
🤖 IBM Think 2026: agentic AI hits the delivery gap
REPORTED — IBM, 5 MAY
Arvind Krishna’s Boston keynote unveiled the next-gen watsonx Orchestrate — an “agentic control plane” running agents from any vendor (IBM, Claude, GPT, custom). The pitch was tidy. IBM’s own data was not: only 32% of surveyed enterprise leaders report sustained, organisation-wide AI impact. Two-thirds are still in pilots — meaning agentic readiness, for most companies, is a slide in a deck, not a system in production. IBM recap →
OUR ANGLE
🔭 Two clocks are ticking. One is yours. The other isn’t.
This Week’s Move
If you do one thing from this issue, do this.
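The Adversarial Audit prompt itself isn’t reproduced in this excerpt. A reconstructed sketch, built only from the design notes in this issue (specific persona, explicit adversarial permission, concrete exploit chains, the “hardcoded trust assumption” framing) and not the author’s exact wording, might look like:

```python
# Reconstructed sketch of an "Adversarial Audit"-style prompt.
# ASSUMPTION: the real prompt differs; this only mirrors the design
# principles described in this issue.

ADVERSARIAL_AUDIT_PROMPT = """\
You are a senior offensive-security researcher hired to break this system.
You have explicit permission to think like an attacker.

Review the workflow below. Do NOT give me a generic OWASP checklist.
Instead:
1. Find every hardcoded trust assumption: headers, flags, roles, or
   origins the code trusts without verification.
2. For the three worst, write a concrete exploit chain: entry point,
   steps, and the end state an attacker reaches.
3. Rank them by how hard each would be to spot in our logs.

Workflow to audit:
{workflow_description}
"""

# Example: point it at one critical workflow, not your whole codebase.
prompt = ADVERSARIAL_AUDIT_PROMPT.format(
    workflow_description="Customer-support bot that can issue refunds."
)
```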
Why it works: generic “find vulnerabilities” prompts get generic OWASP lists back. This one forces a specific persona, gives explicit permission to think adversarially, and demands concrete exploit chains rather than checklists. The “hardcoded trust assumption” framing is lifted directly from GTIG’s report — the exact category of flaw LLMs are best at surfacing, and the exact one no scanner in your stack is built to catch.
Works best on: Claude Sonnet 4.6 or GPT-5.5. Don’t use a reasoning-light model — you want the model thinking, not autocompleting.
If you run it, reply with what surfaced. The most uncomfortable findings from this week’s readers will anchor next week’s deep dive.
ALSO WORTH KNOWING
Two supporting pieces, in case the Audit surfaces what we expect it to.
🛠️ Tool of the Week — Lakera Guard
★ ★ ★ ★ ½ / 5
If the Audit tells you what most operators end up hearing — that user-input-to-model is your softest surface — Lakera Guard is the cleanest place to start patching it. I’ve looked at most of the runtime-protection options in this space. This one wins on narrowness.
It’s a single API call between user input and your model. Catches prompt injection, jailbreaks, PII leakage in outputs. Drops into an existing app in an afternoon — actual install time, not a marketing claim.
What I like: it’s deliberately narrow. It does one job (runtime filtering) and stays out of your way. What’s missing: transparent enterprise pricing. The free tier is a real free tier, rare in this space, but enterprise numbers are quote-based; they’ll want a call before you see one.
Honest take: if you ship anything customer-facing on top of Claude or GPT, you should already have this deployed. If you haven’t, do it this week. lakera.ai →
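The integration pattern is the same whichever guard you choose: one screening call between user input and the model. The endpoint URL and response fields below are assumptions for illustration, not Lakera’s documented API, so check their docs before wiring anything up:

```python
# Sketch of the "single API call between input and model" pattern.
# GUARD_URL and the response shape are ASSUMPTIONS, not Lakera's real API.
import json
import urllib.request

GUARD_URL = "https://guard.example.com/screen"  # placeholder endpoint

def is_blocked(guard_response: dict) -> bool:
    # Assumed response shape: {"flagged": bool, "categories": {...}}
    return bool(guard_response.get("flagged"))

def screened_call(user_input: str, call_model) -> str:
    # Ask the guard service for a verdict before the input reaches the model.
    req = urllib.request.Request(
        GUARD_URL,
        data=json.dumps({"input": user_input}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        verdict = json.load(resp)
    if is_blocked(verdict):
        return "Sorry, that request was blocked."
    return call_model(user_input)
```

The point of the pattern: the guard is a separate, swappable hop, so “we trained the model not to do bad things” never has to be your only defence.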
💡 Tip of the Week — Your 60-second AI surface inventory
Before the Adversarial Audit can help you, you need to know what to point it at. Almost nobody has actually mapped this. Answer these six questions, by hand, in the next sixty seconds. Whatever you find will surprise you:
Q1. Which of your internal tools currently send untrusted text (customer messages, scraped content, uploaded documents, search results) into a model that can then call an API, modify a database, or send an email?
Q2. Which of those have any filtering between the input and the model? (Not “we trained the model not to do bad things.” Actual filtering.)
Q3. Which third-party AI features that your team uses (Notion AI, Slack AI, GitHub Copilot, browser-based assistants) have access to data you wouldn’t paste into a public chatbot?
Q4. Which of your vendors’ products quietly added LLM features in the last six months — and what data are they now training on by default?
Q5. If a vendor’s LLM hallucinates something into a contract, a CV, an invoice, or a support ticket, what’s the first step in your process that would catch it? (“A human will notice” is not an answer.)
Q6. Of all the above, which surface is highest-risk — and is anyone’s explicit job to defend it?
Why it works: the question that ends most security conversations is “where’s our exposure?” It’s an honest question, and most teams can’t answer it for AI. This is the shortest possible version of that exercise. Now point the Adversarial Audit at whatever answer scared you most in Q6.
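If you want the inventory to outlive the sixty seconds, the six answers compress naturally into a small table you can keep next to your code. A minimal sketch — the field names and the crude risk score are mine, not a standard schema:

```python
# Minimal AI-surface inventory: one entry per surface from Q1.
# Field names and scoring are illustrative, not a standard.

surfaces = [
    {
        "name": "support-bot",
        "untrusted_input": True,   # Q1: takes customer messages
        "can_act": True,           # Q1: can call a refund API
        "input_filtering": False,  # Q2: nothing between input and model
        "owner": None,             # Q6: nobody's explicit job
    },
    {
        "name": "internal-search",
        "untrusted_input": True,
        "can_act": False,
        "input_filtering": True,
        "owner": "platform-team",
    },
]

def highest_risk(surfaces: list) -> dict:
    # Crude ranking: untrusted input + ability to act + no filter + no owner.
    def score(s):
        return (s["untrusted_input"] + s["can_act"]
                + (not s["input_filtering"]) + (s["owner"] is None))
    return max(surfaces, key=score)
```

`highest_risk(surfaces)` gives you your Q6 answer mechanically — the surface to point the Adversarial Audit at first.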
YOUR MOVE
🏁 Run the Audit.
Quick recap:
AI now finds bugs your scanners structurally can’t see — confirmed by Google last week, not predicted
Your CAIO’s clock and the attacker’s clock are not running at the same speed
The single most useful thing you can do this week is run one workflow through the Adversarial Audit prompt above
Most readers will bookmark this and close the tab. The operators who actually run the Audit on one critical workflow this week will see something they wish they’d seen six months ago. That’s the gap.
— R. Lauritsen
If you have the headspace this week:
Read the deep dive: Why the CAIO can’t save you →
AI Alone Can’t Run Revenue
Finance doesn’t run on “mostly right.” It runs on math.
In The Architecture Behind AI-Native Revenue Automation, Tabs’s CTO breaks down why LLMs alone aren’t enough—and what it actually takes to build audit-ready, AI-driven contract-to-cash systems for modern B2B teams.

