Trade any market, anywhere, any time.
News doesn’t take weekends off, neither should your portfolio. What about when the S&P moves 3% during your commute?
Liquid allows anyone to trade stocks, commodities, FX, pre-IPO companies, crypto, and more with up to 100X leverage, 24/7, from your phone or computer.
Any Market - Trade everything from Gold to Bitcoin, OpenAI to Nvidia, SK Hynix to JPY.
Anywhere - Liquid is available on iOS, Android, and desktop platforms, letting you trade from the subway, your office, and maybe even the bathroom.
Any Time - Liquid markets are open 24/7; so when an unprecedented event happens on Saturday, you can stay on top of your portfolio.
Active traders qualify for rewards each month!
iPrompt Deep Dive
COMPANION TO ISSUE #131 • 9 APRIL 2026
Your AI Agents Are Ungoverned. The Regulators Are Coming.
What Microsoft’s Agent Governance Toolkit actually covers, what it doesn’t, and the five-step checklist your organisation should run before August 2, 2026.
TL;DR • 97% of enterprises expect a major AI agent security incident in 2026. Fewer than 34% have agent-specific controls in place. • Microsoft’s Agent Governance Toolkit (open-source, MIT) is the first to cover all 10 OWASP agentic AI risks with sub-millisecond policy enforcement. • The EU AI Act’s high-risk obligations take effect 2 August 2026. Colorado’s AI Act goes live June 2026. Penalties reach €35 million or 7% of global turnover. • Governance is no longer a compliance checkbox—it’s the infrastructure layer that determines which organisations can deploy agents at scale and which get grounded. • This article includes a five-step governance checklist with role-specific actions for technical leads, compliance teams, and executives. |
BY THE NUMBERS
Metric | Value |
Enterprises expecting major AI agent incident (2026) | 97% |
Enterprises with agent-specific security controls | 34% |
OWASP Agentic Top 10 risks covered by Microsoft’s toolkit | 10 / 10 |
Policy enforcement latency (p99) | < 0.1 ms |
EU AI Act max penalty (prohibited practices) | €35M or 7% turnover |
Colorado AI Act enforcement date | 1 June 2026 |
EU AI Act high-risk obligations date | 2 August 2026 |
Business apps using AI agents by end of 2026 (est.) | 40% |
Q1 2026 AI venture funding | $267.2 billion |
WHY THIS MATTERS NOW
AI agents aren’t coming. They’re here. They’re booking flights, executing trades, writing and deploying code, managing infrastructure, and sending emails on behalf of humans—often with less oversight than a junior employee would get on their first day. By end of 2026, an estimated 40% of business applications will include AI agents. That’s not a forecast from a slide deck at a conference. That’s the world we’re already building.
The problem isn’t capability. It’s accountability.
When a chatbot hallucinates, someone reads a wrong answer. Annoying, maybe embarrassing, but contained. When an agent hallucinates, it takes a wrong action—and that action might be irreversible. Deleting production backups because a cost-optimisation agent decided they were “unnecessary storage.” Sending confidential data to the wrong endpoint. Approving a transaction nobody authorised. The OWASP framework has a name for this: ASI10, Rogue Agents. It’s listed as one of the top ten risks for a reason.
And honestly? Most organisations aren’t ready. Not even close.
“Companies are already exposed to agentic AI attacks—often without realising that agents are running in their environments.” — Keren Katz, OWASP Agentic AI Top 10 Co-Lead, Senior Group Manager of AI Security at Tenable |
Three things converged this week that make governance the story, not just a supporting theme:
1. The Medvi cautionary tale. A two-person, AI-built company hit $401 million in revenue—then immediately faced FTC scrutiny, fake-doctor affiliate ads, and a potential HIPAA breach. The AI tools that compressed headcount also compressed the compliance function to zero. Moving fast and breaking things works until the thing you break is a regulation.
2. Microsoft’s toolkit drop. Microsoft doesn’t open-source seven packages of runtime governance infrastructure on a whim. This release signals that even the largest AI platform companies believe the current state of agent security is untenable—and that governance needs to be a shared standard, not a proprietary feature.
3. The regulatory clock. The EU AI Act’s high-risk obligations take legal effect on 2 August 2026—less than four months from now. Colorado’s AI Act lands even sooner, in June. Organisations that haven’t started classifying their AI systems and implementing governance frameworks are already behind.
WHAT MICROSOFT’S TOOLKIT ACTUALLY DOES
Let’s be concrete about what this is and isn’t. The Agent Governance Toolkit is a seven-package system released under MIT licence. It’s not a chatbot filter. It’s not a prompt guard. It governs agent actions—the tool calls, resource access, and inter-agent communication that happen after the model generates a response. Think of it as a firewall that sits between “what the model wants to do” and “what actually happens.”
The architecture borrows patterns from OS kernels and service meshes—solved problems in other domains, now translated to AI agents. Here’s what each package does:
Agent OS: A stateless policy engine that intercepts every agent action before execution. Supports YAML, OPA Rego, and Cedar policies. Sub-millisecond latency (p99 under 0.1ms)—roughly 10,000x faster than an LLM API call, so it adds essentially zero overhead.
Agent Mesh: Cryptographic identity for agents using DIDs with Ed25519 signing, secure inter-agent communication via IATP, and dynamic trust scoring on a 0–1000 scale. If an agent’s trust score drops, its permissions automatically narrow. This is the “zero-trust for AI” layer.
Agent Runtime + Agent SRE: Execution “rings” (like OS kernel privilege levels), kill switches, circuit breakers, and SLO tracking for multi-step workflows. When something goes sideways, the circuit breaker trips before the damage cascades.
Agent Compliance: Automated governance verification mapped to the EU AI Act, HIPAA, and SOC2, with evidence collection covering all 10 OWASP agentic risk categories. This is the bit your compliance team will actually care about.
Agent Marketplace: Plugin lifecycle management with cryptographic signing and trust-tiered capability gating—so your agent can’t install an unverified tool at runtime. Think npm, but with actual security.
THE FAILURE MODE NOBODY TALKS ABOUT
Here’s the scenario that keeps security teams up at night—and it’s not the dramatic one.
It’s not a rogue agent going haywire and deleting your database. That’s the movie version. The real failure mode is quieter: an agent that’s 95% right, running at scale, 24 hours a day. An automated procurement agent that approves invoices slightly above threshold because its cost-optimisation objective conflicts with your approval policy. A customer service agent that shares a discount code it was never authorised to generate. A code review agent that rubber-stamps a dependency with a known vulnerability because it parsed the changelog wrong.
None of these trigger an alarm. Each individual error is small. But multiply a small error by thousands of autonomous actions per day, and you’ve got a systemic problem that only surfaces in the quarterly audit—or worse, in the regulatory investigation.
This is why runtime governance matters more than pre-deployment testing. You can test an agent thoroughly in staging and still miss the failure modes that only emerge in production, under real data, at scale. The toolkit’s approach—intercepting every action in real time—is designed for exactly this class of problem.
WHAT IT DOESN’T DO
Important caveats. No single tool solves governance, and anyone who tells you otherwise is selling something.
• It’s application-layer, not OS-level. The policy engine and the agents run in the same Python process. This is the same trust boundary as LangChain or CrewAI. A truly malicious agent running in the same process could theoretically bypass governance. Defence in depth still applies.
• It doesn’t filter model outputs. This isn’t a content safety tool. It governs what agents can do, not what they say. For model-level safety—hallucination detection, toxicity filtering—you still need separate guardrails like Azure AI Content Safety.
• It’s in public preview. Production-quality code with 9,500+ tests and SLSA-compatible build provenance, but breaking changes are possible before GA. No independent third-party security audit has been published yet.
• It’s not a compliance certificate. Using the toolkit doesn’t make you EU AI Act compliant. It gives you tooling to generate compliance evidence. You still need the governance framework, the risk assessments, the human oversight processes, and the organisational accountability structures around it.
• It doesn’t solve the identity problem. Agent Mesh handles cryptographic identity between agents, but it doesn’t manage the underlying credentials those agents use to access your systems. API keys, OAuth tokens, service accounts—those still need traditional secrets management.
• It won’t help retroactively. If your agents are already running in production without governance, the toolkit can’t un-send the emails they’ve already sent or un-approve the transactions they’ve already approved. Start now.
BULL CASE / BEAR CASE
🟢 BULL Governance becomes a competitive moat. Organisations that implement agent guardrails early will be the only ones trusted to deploy agents at enterprise scale. Compliance readiness accelerates sales cycles. Microsoft’s open-source approach creates a shared standard. Framework-agnostic design means rapid adoption. First-mover governance tools become the default—like HTTPS became the default for web traffic. | 🔴 BEAR Governance becomes compliance theatre. Organisations adopt the toolkit superficially, generating audit evidence without changing agent behaviour. The OWASP checklist becomes a box-ticking exercise. Regulatory fragmentation slows everything. The EU AI Act, Colorado AI Act, and emerging US federal rules create conflicting requirements that paralyse deployment. The governance tax exceeds the productivity gain. |
TIME-STAMPED PREDICTIONS
• By June 2026: Colorado’s AI Act enforcement triggers the first US state-level penalty related to an AI agent’s autonomous action.
• By August 2026: At least three Fortune 500 companies publicly adopt Microsoft’s Agent Governance Toolkit or a comparable open-source framework as their standard agent security layer.
• By Q3 2026: The first major enforcement action triggered by an autonomous AI agent—not a chatbot hallucination, but an agent taking an unauthorised real-world action.
• By Q4 2026: “Agent governance” appears as a line item in enterprise software procurement checklists, alongside SOC2 and GDPR compliance.
THE FIVE-STEP AI AGENT GOVERNANCE CHECKLIST
Run this before August 2, 2026. Each step includes role-specific actions.
Step 1: Inventory your agents.
Map every AI agent in your environment. What tools does each one access? What credentials does it hold? What actions can it take without human approval?
• Technical lead: Build a registry of agents, their tool access, and permission scopes.
• Compliance: Cross-reference agent capabilities against EU AI Act Annex III high-risk categories.
• Executive: Confirm who in the organisation is accountable for each agent’s actions.
Step 2: Apply least agency.
Never give an agent more autonomy than the business problem requires. Restrict tools, credentials, and decision authority to the narrowest scope possible.
• Technical lead: Implement tool allowlists and denylists. Use the Agent OS policy engine to enforce at runtime.
• Compliance: Document the business justification for each agent’s permission scope.
• Executive: Require human-in-the-loop approval for any agent action above a defined impact threshold.
Step 3: Treat all agent inputs as untrusted.
Every document, email, API response, and RAG result an agent processes is a potential injection vector. Prompt injection is the #1 agentic risk for a reason.
• Technical lead: Implement input validation and prompt injection detection. The toolkit’s .NET SDK includes a built-in detector.
• Compliance: Establish a data classification policy for agent-accessible data stores.
• Executive: Mandate that agents interacting with customer data undergo the same security review as customer-facing applications.
Step 4: Implement kill switches and circuit breakers.
When an agent misbehaves, you need the ability to shut it down instantly—not after a review meeting. The toolkit’s Agent SRE package provides exactly this.
• Technical lead: Deploy kill switches with sub-second response. Set SLOs for agent reliability. Configure circuit breakers for cascading failure prevention.
• Compliance: Define incident response protocols with 72-hour notification windows aligned to EU AI Act requirements.
• Executive: Establish an agent incident response team with clear escalation paths.
Step 5: Generate compliance evidence continuously.
Compliance isn’t a one-time audit. The Agent Compliance package automates governance verification and maps evidence to regulatory frameworks. Use it.
• Technical lead: Enable OpenTelemetry metrics and tamper-proof audit logs for every agent action.
• Compliance: Map your agent inventory to EU AI Act risk categories and generate attestation reports quarterly.
• Executive: Include agent governance metrics in board-level risk reporting.
GO DEEPER
Start here — Microsoft Agent Governance Toolkit on GitHub — MIT-licensed, 9,500+ tests, quickstart guides for Python, TypeScript, .NET, Rust, and Go.
Understand the threat model — OWASP Top 10 for Agentic Applications 2026 — the peer-reviewed industry standard for agentic AI security risks.
Know the regulatory timeline — EU AI Act Official Overview — full timeline of phased enforcement through 2027.
See it in context — Microsoft Security Blog: OWASP Agentic Risks + Copilot Studio — practical mitigations mapped to each OWASP risk.
This deep dive accompanies iPrompt Issue #131. If someone forwarded this to you and you want the weekly newsletter, subscribe free at iprompt.email.
— R. Lauritsen
Stay curious—and stay paranoid.
A Senior Analyst Sees Half a Billion Dollar Potential.
Kingscrowd Capital's senior analyst reviewed RISE Robotics and projected potential growth to a $500 million valuation. The community round is open now on Wefunder. You don't have to be an institutional investor to get in at today's price.


