What you get in this FREE Newsletter
In Today’s 5-Minute AI Digest, you will get:
1. The MOST important AI News & research
2. AI Prompt of the week
3. AI Tool of the week
4. AI Tip of the week
…all in a FREE Weekly newsletter.
Sponsor:
Claude is not just a chatbot anymore. Is your security team ready?
Claude.ai is one thing. Claude Cowork with MCP connections, running agentic workflows, taking actions across your data with ungoverned skills? That is a different conversation entirely, and most security teams are not equipped to govern it.
Harmonic Security is built to secure everything Claude offers. Full browser controls for Claude.ai, deep governance over agentic MCP workflows, and real-time visibility into what Claude is doing across your organization. So your CISO can say yes to the tools your business is already demanding.
iPrompt
THE AI NEWSLETTER THAT TURNS NEWS INTO ACTION
ISSUE #135 WEDNESDAY · 6 MAY 2026
THE HOOK
Two Anthropic employees sold the same broken folding bike to the same buyer. One pocketed $65. The other got $38. Same item, same buyer — only the AI agent doing the haggling changed. Neither side noticed the gap. If your business is heading toward agents handling renewals, vendor pricing, or sales outreach — and most are — this is your problem now. Both sides rated the deals fair. That's the part that should keep you up.
AI NEWS ROUNDUP
This week in AI
1 Anthropic published Project Deal. Sixty-nine employees, 500+ items, 186 deals, $4,000 changing hands inside Slack — all negotiated by Claude agents with no human input after the intake interview. The published writeup details four parallel runs comparing Opus 4.5 and Haiku 4.5 agents on identical preferences. Worth the ten minutes if you haven't read it. Read the writeup →
2 The Trump White House is reversing course on AI oversight. Sixteen months after rescinding Biden's safety-testing executive order, the administration is now drafting a new one — a working group that would review frontier models before public release. The trigger? An unreleased Anthropic model called Mythos that the company itself flagged as potentially capable of triggering “a cybersecurity reckoning.” Officials reportedly briefed Anthropic, Google and OpenAI last week. NYT via Bloomberg →
3 Time-to-exploit has gone negative. Mandiant's M-Trends 2026 report (published April, sample of 600+ investigations) found 28.3% of disclosed CVEs now exploited within 24 hours of disclosure. The “negative” part is a small but growing category: exploits landing before the patch ships. Caveat — this isn't most CVEs, just the most weaponisable ones. But Black Hat Asia put a separate number on the trend: bug-to-working-exploit collapsed from five months in 2023 to ten hours in 2026, with frontier models doing the offensive work. The Hacker News →
4 Saperly launched as the first phone carrier built for AI agents. Real phone numbers, voice, SMS, routing — all through one API designed for agents instead of humans. Your agent gets its own line, owns its own caller ID, switches between calling and messaging without a second provider. Boring infrastructure. The kind that becomes load-bearing about six months before anyone notices. Saperly →
OUR ANGLE
🔭 Everyone's worried about the wrong side of the agent gap

The consensus take after Project Deal is straightforward: stronger consumer agents will out-bargain weaker ones, the rich get richer, regulate fast. That's the take in most of the coverage. It's also probably wrong — not because the gap isn't real, but because it isn't durable on the consumer side. Here's the chain.

First, consumer-side agent quality is already compressing. Open-source models are two to three benchmark points behind frontier and shrinking. A well-prompted Llama running on Browser Use closes most of a one-shot negotiation gap. The “Opus vs Haiku” spread Anthropic measured was within one model family; cross-family on the same task, the spread is smaller and falling.

Second, the merchant side is going the opposite direction. A consumer agent sees one negotiation. Amazon's pricing agent — or any platform's — sees a billion. Every transaction trains the counter-agent. That's not a model-quality advantage that commodifies; it's a data advantage that compounds. The consumer agent goes to school once. The merchant agent goes to school every Tuesday. Saperly's agent-only phone rails and Salesforce's Headless 360 are the early infrastructure for this side of the table.

Third, the asymmetry shows up in pricing. Once a merchant's agent can identify a weaker counter-agent — by behaviour, by API fingerprint, by anchor strategy — it can quote a higher price for the same SKU. Not maliciously. Just optimally. That's already how dynamic pricing works for human signals like browser type and zip code. Agents make it cleaner.

Fourth, this becomes detectable. Not by the consumer — by regulators and class-action lawyers, who can run statistical tests across millions of transactions in a way no individual ever could.
So the prediction: within roughly eighteen months — call it Q3 2027 — the first major lawsuit lands against a platform whose merchant-side AI was found to systematically discriminate between consumer agents. The story won't be “AI made shopping better.” It'll be “your AI got profiled.”
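That fourth step, detectability at scale, is easy to sketch. Assuming (hypothetically) that discovery surfaces a platform's quote logs tagged by detected agent family, a plain permutation test on same-SKU quotes is all a regulator needs — no machine learning, just counting:

```python
import random
from statistics import mean

def price_gap_pvalue(prices_a, prices_b, n_perm=5000, seed=0):
    """Permutation test: is the mean price quoted to agent family A
    higher than to family B for the same SKU, beyond chance?"""
    rng = random.Random(seed)
    observed = mean(prices_a) - mean(prices_b)
    pooled = list(prices_a) + list(prices_b)
    k = len(prices_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        # Count how often a random relabelling produces a gap at
        # least as large as the one actually observed.
        if mean(pooled[:k]) - mean(pooled[k:]) >= observed:
            hits += 1
    return hits / n_perm

# Hypothetical logs: quotes for one SKU, split by detected agent family.
weak_agent_quotes = [104.0, 101.5, 103.2, 105.1, 102.8, 104.6]
strong_agent_quotes = [97.9, 99.1, 98.4, 96.7, 99.8, 98.2]

p = price_gap_pvalue(weak_agent_quotes, strong_agent_quotes)
```

A small p-value here says the gap between agent families is not noise. Run that across millions of transactions and the class-action exhibit writes itself.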
THE THREE SPECIALS
Do · Use · Understand
🎯 PROMPT OF THE WEEK
The Agent Brief

The Project Deal interviews are the most interesting part of the experiment. Each participant got a ten-minute interview, and that interview built the system prompt that ran their agent. If a future agent is going to negotiate on your behalf — for purchases, salary, refunds, anything — your “agent brief” is the leverage you have.

Here's a starting brief you can save and reuse. Paste it into Claude or ChatGPT before any negotiation task:
You are negotiating on my behalf. Before I give you the task, here's my profile — refer back to it whenever you make a decision.

WHAT I VALUE:
- Outcome > speed. I'd rather wait two days for a 15% better deal.
- Honesty > likeability. Don't soften my position to be polite.
- Walk-away is real. If terms cross my floor, exit cleanly.

MY DEFAULTS:
- Floor (worst acceptable): [fill in]
- Target (what I actually want): [fill in]
- Anchor (where to open): [fill in]

MY STYLE:
- Direct, not aggressive. No hard-sell language.
- British English. No exclamation marks.
- One-line summary at every decision point so I can intervene.

WHEN UNCERTAIN: Ask me. Don't guess on price.
Why it works: Project Deal showed that clarity of preference beat tactical instructions. Telling the model your floor, target and anchor — three numbers — does more than a paragraph of “be tough but friendly.” Anchoring is also the most under-used negotiation lever; if you don't tell the agent where to open, it'll start somewhere reasonable, and reasonable usually loses.
Where to be careful: this brief is built for low-to-medium stakes, mostly-text exchanges — refunds, second-hand purchases, vendor renewals, freelance scope. It is not suitable as-is for salary negotiations, legal contracts, or anything where the counterparty is human and the relationship matters more than the price. For those, use it as a thinking aid, not an execution tool.
Works best on: Claude Opus 4.5, GPT-5.
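If you'd rather generate the brief in code than paste it by hand, a tiny template helper keeps the three numbers front and centre. A minimal sketch — the field names are just this newsletter's brief, not any official format, and `build_brief` is a name invented here:

```python
BRIEF_TEMPLATE = """\
You are negotiating on my behalf. Refer back to this profile whenever you make a decision.

WHAT I VALUE:
- Outcome > speed. I'd rather wait two days for a 15% better deal.
- Honesty > likeability. Don't soften my position to be polite.
- Walk-away is real. If terms cross my floor, exit cleanly.

MY DEFAULTS:
- Floor (worst acceptable): {floor}
- Target (what I actually want): {target}
- Anchor (where to open): {anchor}

WHEN UNCERTAIN: Ask me. Don't guess on price."""

def build_brief(floor: str, target: str, anchor: str) -> str:
    """Fill in the three numbers Project Deal suggests matter most."""
    return BRIEF_TEMPLATE.format(floor=floor, target=target, anchor=anchor)

brief = build_brief(floor="$80", target="$110", anchor="$135")
```

Pass the resulting string as the system prompt of whichever model you're using, then give the task itself as the first user message.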
🛠️ TOOL OF THE WEEK
Browserbase — The browser stopped being for humans.
★★★★½ / 5

Skip if: you don't write code or commission anyone who does.
Use if: your team is building anything that needs to log into sites, fill forms, or scrape behind auth walls — internal tools, monitoring, lead enrichment, anywhere Zapier hits a paywall and a human has been doing the click-through.

Browserbase is the cloud-browser layer most agentic startups now build on. Drop-in Playwright-compatible. They processed 36.9 million unique browser sessions in March and closed a $40M Series B at a $300M valuation last summer.

Describe it to a colleague: “It's a browser as a service — except the user is your agent, not you.”

- Persistent sessions with managed cookies — no logging in twenty times.
- Built-in stealth mode for sites that block bots reflexively.
- Session recordings — watch what your agent did when something goes wrong.
- Native integrations with Stagehand, Browser Use, Vercel's agent-browser CLI.
- Free tier to start.
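“Drop-in Playwright-compatible” means you attach standard Playwright to a remote Chromium over CDP instead of launching one locally. A minimal sketch — the websocket URL follows Browserbase's documented connection pattern at time of writing, but verify against their current docs before relying on it:

```python
import os

def browserbase_ws_url(api_key: str) -> str:
    # Connection URL pattern per Browserbase's docs; check the current
    # docs, as endpoint formats can change.
    return f"wss://connect.browserbase.com?apiKey={api_key}"

def fetch_title(api_key: str) -> str:
    """Open a cloud browser session and return a page title."""
    from playwright.sync_api import sync_playwright  # pip install playwright
    with sync_playwright() as p:
        # connect_over_cdp is standard Playwright, not a
        # Browserbase-specific API.
        browser = p.chromium.connect_over_cdp(browserbase_ws_url(api_key))
        page = browser.contexts[0].pages[0]
        page.goto("https://example.com")
        title = page.title()
        browser.close()
        return title

# Only runs against the real service if a key is present.
if os.environ.get("BROWSERBASE_API_KEY"):
    print(fetch_title(os.environ["BROWSERBASE_API_KEY"]))
```

Everything after `connect_over_cdp` is the Playwright you already know, which is the whole pitch: no new automation API to learn, just a different browser on the other end of the socket.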
💡 TIP OF THE WEEK
Why model choice beats prompt-tuning

Embarrassing admission: I have a Notion folder with 40+ saved prompts I've spent hours fine-tuning. It's possible most of that time was wasted.

Project Deal just put a real dent in the prompt-engineering religion. Anthropic ran identical interviews through Opus 4.5 and Haiku 4.5. Same instructions. Same data. On their negotiation task, Opus runs averaged $2.68 more per item for sellers and closed two more deals on average (Anthropic's published figures). The instructions barely mattered.

That's one task in one experiment — but the pattern holds anywhere the work involves ambiguity, multi-turn reasoning, or judgement under incomplete information. For those task types, model tier is doing more work than prompt-tuning. Cost difference between Haiku and Opus: roughly 5x per token. Usually worth it.

When it doesn't apply: narrow tasks — extraction, classification, summarisation. Smaller models match larger ones once the task is well-defined. Prompt-tuning still pays.

Pro move: before committing a recurring task to a model, run ten samples through both tiers on outcomes you actually care about — closed deals, accuracy on edge cases, customer satisfaction. Not benchmark scores. If the bigger model doesn't win on what matters, downgrade.
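The “pro move” above is a twenty-line harness. A minimal sketch, with the model calls stubbed out as lambdas — wire `call_big` and `call_small` to your own API client, and swap `score` for the outcome metric you actually care about:

```python
from statistics import mean
from typing import Callable

def ab_compare(task_inputs, call_big: Callable, call_small: Callable,
               score: Callable) -> dict:
    """Run the same inputs through two model tiers and compare them on
    your outcome metric, not a benchmark score."""
    big = [score(call_big(x)) for x in task_inputs]
    small = [score(call_small(x)) for x in task_inputs]
    return {
        "big_mean": mean(big),
        "small_mean": mean(small),
        "keep_big": mean(big) > mean(small),  # downgrade if False
    }

# Stubbed example so the harness runs as-is: fake "models" that just
# add to the input. Replace with real calls before trusting the verdict.
inputs = list(range(10))
result = ab_compare(
    inputs,
    call_big=lambda x: x + 2,    # placeholder for the Opus-tier call
    call_small=lambda x: x + 1,  # placeholder for the Haiku-tier call
    score=lambda y: y,           # placeholder outcome metric
)
```

Ten samples won't settle a close race, but it will catch the common case: the tiers tie on your task and you've been paying 5x for nothing.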
YOUR MOVE
Pick one. Reply by Friday.
You just learned:
Project Deal showed agent quality moves real money — and losers don't notice.
The bigger moat probably isn't your agent. It's the merchant's.
Model tier does more work than prompt-tuning on tasks with real stakes.
Pick one of these three and do it before Friday. Save the Agent Brief, run the Opus-vs-Haiku A/B on a recurring task, or read Anthropic's Project Deal writeup with our angle in mind.
Then reply and tell me which one. That's the action that matters this week — not the share, not the deep dive. Reply. I read every response. The pattern across replies is what shapes next week's issue, and the readers who actually move tend to be the ones whose names I start to recognise.
—
R. Lauritsen
P.S. The deep dive on the counter-agent moat goes further on the merchant-side argument.
Stop making AI decisions in the dark. Understand AI usage.
Leadership is asking: are we getting value from AI? Where are we exposed? Right now, most teams have no idea.
Harmonic Security automatically maps every AI interaction into the use cases driving real work — so CIOs can rationalize spend, CISOs get risk in context, and AI committees get proof of impact.


