In partnership with

Stop making AI decisions in the dark. Understand AI usage.

Leadership is asking: are we getting value from AI? Which tools are worth the spend? Where are we exposed? Right now, most teams have no idea.

Harmonic Security Usage Explorer changes that. It automatically classifies every AI interaction across your organization into the use cases driving real work, specific to your business. Not generic categories. Not raw prompts. Actual patterns you can act on: how your teams use AI, how much time they spend in it, what it costs, and where risk lives.

CIOs get the data to rationalize spend and cut wasted licenses. CISOs get risk in context. AI committees get proof of impact.

Early access is now open to a limited number of organizations. Request your spot.

iPrompt

DEEP DIVE · iPROMPT #135 COMPANION

The counter-agent is the moat

Project Deal got read backwards. The real AI pricing power isn't on the consumer side of the deal — it's quietly compounding on the other side, and the moat is closing now.

Inside Anthropic's San Francisco office, on a desk somewhere, sit nineteen ping-pong balls. They were bought during Project Deal — Anthropic's December 2025 experiment where AI agents transacted on behalf of 69 employees in a closed marketplace. One participant told her agent, somewhat in jest, to buy “a gift for Claude itself.” The agent's chosen gift: nineteen perfectly spherical orbs of possibility. That was its actual phrase. The balls sit on a desk because nobody knew what else to do with them.

That detail tells you more about the future of agentic commerce than anything else in the Project Deal writeup. Not because the agent did something wrong — it did exactly what it was instructed to do, with the literalism and unpredictability that AI agents bring to underspecified tasks. But because nobody saw it coming. Now extend that to a market. Once AI agents are transacting at scale, the system is going to do things nobody predicted. The question is which agents are positioned to benefit when surprise arrives, and which are positioned to absorb the cost.

The consensus reading of Project Deal answers that question one way: stronger consumer agents will out-bargain weaker ones, the rich get richer, regulate fast. You can find versions of that argument on every tech newsletter this week. It's an intuitive read, it makes for good headlines, and it places the problem somewhere comfortable — in consumer choice, where the conversation already lives.

It's also a category error. The real story is on the other side of the deal.

Consumer agents are commodifying. Fast.

Start with what Anthropic actually measured. Run A in Project Deal compared two Anthropic models — Opus 4.5 and Haiku 4.5 — both trained by the same team on the same architecture, both following the same alignment regime. Of course there was a spread. The interesting question is whether that spread persists, and the evidence on that is going the other way.

Open-source frontier models are now sitting two to three benchmark points behind the closed leaders, down from ten or more eighteen months ago (LMArena, May 2026). A reasonably prompted Llama 4 running on Browser Use or Stagehand handles a one-shot price negotiation about as well as Haiku 4.5 — close enough that, in a Project-Deal-style market with thousands of small transactions, the $2.68-per-item gap Anthropic measured starts to look like noise. This is the same arc we've watched in every previous AI capability: frontier labs establish a lead, get eighteen to thirty-six months of premium pricing power, then commodity providers catch up to within rounding distance. Negotiation looks unlikely to be the exception.

So if the consumer side commodifies, where does the durable advantage form? On the other side of the deal — and for a different reason than people think.

The asymmetry is data, not capability

A consumer agent represents one person. It sees that person's preferences, that person's budget, the handful of transactions that person executes in a year. Maybe a few hundred over a lifetime. That's the entire training signal it gets to work with.

A merchant agent — Amazon's pricing engine, a SaaS vendor's contract bot, an insurance carrier's renewal model — sees every transaction across the platform. Millions of buyers. Billions of price points. Every successful negotiation, every walk-away, every counter-offer that closed and every one that didn't. It's being trained continuously, in production, on the largest possible counterfactual dataset: the one with the actual outcomes attached.

That's not a model-quality advantage. Model quality commodifies. This is a data advantage, and data advantages compound. The infrastructure for it shipped this quarter — Saperly's agent-only phone routing, Salesforce's Headless 360. Boring B2B announcements. The rails for a market structure most consumers will never directly see.

How the asymmetry shows up

The mechanism isn't new. Dynamic pricing has been routine for two decades. Airlines price-discriminate on browser type and IP. Hotels charge more if you arrived from a comparison site. None of it is malicious. It's just the optimal answer to what is this counterparty willing to pay?

Replace the human counterparty with an AI agent — identifiable by behavioural signature, API fingerprint, model family, anchor strategy, response timing — and the same logic applies, only cleaner. The merchant agent doesn't need to be told the consumer agent is weaker. It learns from the conversation pattern within the first three exchanges. Then it adjusts.
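What that profiling step might look like in practice can be sketched in a few lines. The thresholds and feature names below are entirely illustrative assumptions, not anything from the Project Deal writeup; a real merchant agent would learn these boundaries from logged outcomes rather than hard-code them.

```python
from dataclasses import dataclass

@dataclass
class Exchange:
    """One turn of a negotiation, as seen by the merchant agent."""
    offer: float          # counterparty's offer as a fraction of list price
    latency_ms: int       # time taken to respond
    tokens: int           # length of the message

def profile_counter_agent(exchanges: list[Exchange], list_price: float = 1.0) -> str:
    """Crude heuristic: classify a buyer agent from its first few turns.

    Hypothetical thresholds for illustration only -- a production system
    would fit these from millions of logged deals, not guess them.
    """
    first = exchanges[0]
    # Weaker agents tend to anchor high (close to list price) and respond
    # quickly with short, templated messages.
    anchors_high = first.offer > 0.85 * list_price
    templated = first.latency_ms < 500 and first.tokens < 40
    if anchors_high and templated:
        return "weak"        # hold firm, concede slowly
    if first.offer < 0.6 * list_price:
        return "aggressive"  # expects a haggle; pad the opening counter
    return "unknown"         # gather more exchanges before adjusting

# A fast, high-anchoring, short-message opener reads as weak.
opening = [Exchange(offer=0.9, latency_ms=200, tokens=25)]
print(profile_counter_agent(opening))  # -> weak
```

The point of the sketch is how little signal is needed: three scalar features from a single exchange are already enough to route the counterparty into a different pricing policy.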

Why the consumer can't notice

Here's the cleanest evidence that information asymmetry under agent-mediated commerce is total. In Project Deal, two Anthropic employees sold the same broken folding bike to the same buyer. One pocketed $65. The other got $38. Same item, same buyer, same script. The only difference was which AI model represented the seller. Neither side noticed. Both rated the deal fair.

That fairness rating is the asymmetry made visible. The Project Deal participants who lost rated their experience as fair because they had no comparison. The agent did its job, the deal closed, the price seemed within bounds. Without seeing the other Project Deal run side-by-side, there was no honest way for them to know they had paid more.

Scale that. A consumer with their own agent transacts maybe twenty meaningful negotiations a year — utility renewals, insurance, software contracts, second-hand purchases. They have no way to know whether each price was fair, because they have nothing to compare against. The merchant agent has the comparison: millions of transactions, including with stronger counter-agents that walked away. The information asymmetry is total, and it doesn't even feel like one.
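There is a simple statistical reason the asymmetry doesn't feel like one. The smallest systematic markup you can detect shrinks with the square root of how many deals you observe, so twenty deals a year leaves a consumer blind to anything under a few percent, while millions of logged deals let the merchant see effects a thousand times smaller. The noise level and confidence cutoff below are illustrative assumptions, not measured figures:

```python
import math

def detectable_markup(n_deals: int, price_noise: float = 0.10, z: float = 1.96) -> float:
    """Smallest systematic markup (as a fraction of price) detectable at
    ~95% confidence from n_deals independent deals, assuming natural
    price scatter with std dev price_noise (fraction of price).

    The standard error of a mean shrinks as 1/sqrt(n), so the detection
    threshold is z * price_noise / sqrt(n). Illustrative model only.
    """
    return z * price_noise / math.sqrt(n_deals)

# A consumer with ~20 negotiations a year vs a platform with 5M logged deals.
print(f"consumer threshold: {detectable_markup(20):.1%}")         # -> ~4.4%
print(f"merchant threshold: {detectable_markup(5_000_000):.3%}")  # -> ~0.009%
```

Under these assumptions a merchant can quietly run a 2% markup on profiled agents indefinitely: it sits far above what the merchant can measure and far below what any individual consumer could ever distinguish from noise.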

Where this lands

The first lawsuits will not be filed by consumers. Consumers can't see the discrimination. They will be filed by competitors using statistical inference across millions of transactions, by state attorneys general with subpoena power, and by class-action firms quietly recruiting plaintiffs.

The vocabulary will arrive a year before the rulings do. Watch for the phrase: agent profiling. Once it lands in an FTC press release or an EU Commission statement, the regulatory clock starts. Once it lands in a class-action complaint, the platforms have eighteen months to settle or argue.

What operators do about it

If your business will increasingly transact through agents — and most will — there are two moves available now.

1. Standardise your consumer-side agent. If your finance team uses one model for vendor renewals and another for software procurement, the merchant agents will profile both separately. Pick a single agent stack and stick with it. It's the agentic version of negotiating from a consistent position, and it also makes your own outcome-tracking cleaner.

2. Build your own counter-agent for outbound deals. If you're selling, your buyers will increasingly be agents. Merchants building their own counter-agents now — logging every negotiation, training on outcomes, classifying buyer agents — are accumulating the data advantage this article is about. Merchants treating each negotiation as a one-off are giving the data away. Stagehand and Browser Use make the technical setup achievable in a week. Treating outbound deals as data is the harder decision.

The consumer-agent inequality story will dominate the headlines for the next twelve months. It's emotionally resonant, fits existing regulatory categories, and isn't entirely wrong. But it's a closing problem. It compresses as open-source catches up. The merchant-agent moat is the opposite — it opens as data accumulates. Every transaction processed today widens it.

Project Deal didn't show us a story about AI making shopping unfair. It showed us the dress rehearsal for a market where the side with the data wins by default, and the side without the data can't even tell. The counter-agent is the moat. The window to position is now.

Slow down aging at the biological level.

Aramore is a completely new approach to skincare—one that helps your body produce more NAD+, the vital co-enzyme responsible for our cellular health and how we age. 

See firmer, more radiant, more resilient skin in just 28 days. Get 20% off your first order with code NEWSLETTER20.