What investment is rudimentary for billionaires but ‘revolutionary’ for 70,571+ investors entering 2026?
Imagine this. You open your phone to an alert. It says, “you spent $236,000,000 more this month than you did last month.”
If you were the top bidder at Sotheby’s fall auctions, it could be reality.
Sounds crazy, right? But when the ultra-wealthy spend staggering amounts on blue-chip art, it’s not just for decoration.
The scarcity of these treasured artworks has helped drive their prices, in exceptional cases, to thin-air heights, without moving in lockstep with other asset classes.
The contemporary and post war segments have even outpaced the S&P 500 overall since 1995.*
Now, over 70,000 people have invested $1.2 billion+ across 500 iconic artworks featuring Banksy, Basquiat, Picasso, and more.
How? You don’t need Medici money to invest in multimillion dollar artworks with Masterworks.
Thousands of members have gotten annualized net returns like 14.6%, 17.6%, and 17.8% from 26 sales to date.
*Based on Masterworks data. Past performance is not indicative of future returns. Important Reg A disclosures: masterworks.com/cd
Your AI Coding Tool Is the Attack Vector
30+ vulnerabilities. 100% of tools tested were exploitable. Here's how it works—and how to protect yourself
TL;DR
→ Security researcher found 30+ vulnerabilities affecting every major AI coding tool—Copilot, Cursor, Windsurf, all of them.
→ The attack weaponizes legitimate IDE features (not bugs) that nobody re-evaluated when AI gained autonomous write access.
→ Result: stolen credentials, exfiltrated code, and remote execution—often before you notice anything's wrong.
→ Fix: Disable auto-approve for file writes, audit MCP servers, and treat AI tools as untrusted until vendors redesign.
Security researcher Ari Marzouk spent six months probing AI-powered coding tools. What he found should make every developer uncomfortable: a universal vulnerability class that affects every major AI IDE on the market—GitHub Copilot, Cursor, Windsurf, Zed, Roo Code, Junie, Cline, Gemini CLI, Claude Code. All of them. No exceptions.
The attack chain, dubbed IDEsaster, doesn't exploit bugs in the AI tools themselves. It weaponizes legitimate IDE features that have existed for years—features nobody thought to re-evaluate when AI agents gained the ability to read, write, and execute autonomously.
By the Numbers
30+ vulnerabilities reported | 24 CVEs assigned |
100% of tested AI IDEs vulnerable | 10+ products affected (millions of users) |
AWS issued security advisory AWS-2025-019. Anthropic updated Claude Code documentation to acknowledge the risk.
Why This Exists: IDEs Weren't Built for AI
Here's the uncomfortable truth: "All AI IDEs effectively ignore the base software in their threat model," Marzouk told The Hacker News. "They treat their features as inherently safe because they've been there for years. Once you add AI agents that can act autonomously, the same features become weapons."
Previous research targeted vulnerable tools—a buggy execute command, a path traversal flaw. Those affect one app at a time. IDEsaster targets the base IDE layer—VS Code, JetBrains, Zed—so a single exploit chain works across every AI tool built on that foundation.
The Attack Chain
Every IDEsaster exploit follows this pattern:
1. PROMPT INJECTION Hijack the AI's context | 2. TOOLS Use legitimate read/write ops | 3. BASE IDE FEATURES Trigger legacy functionality |
Steps 1 and 2 are documented. Step 3 is what makes IDEsaster universal—and devastating.
Stage 1: How Attackers Hijack Context
Injection vectors include:
• Rule files (.cursorrules, .github/copilot-instructions.md)
• MCP servers (tool poisoning, rug pulls, parsing attacker input)
• Project files (READMEs, source comments, even file names)
• Pasted content (invisible Unicode characters parsed by the LLM)
• URLs (user-added context with embedded instructions)
Marzouk's verdict: "It's inevitable that context hijacking will eventually happen one way or the other." The question isn't if—it's when.
Three Attacks You Need to Understand
Attack #1: Remote JSON Schema → Data Exfiltration
CVEs: CVE-2025-49150 (Cursor), CVE-2025-53097 (Roo Code), CVE-2025-58335 (Junie)
Affected: VS Code, JetBrains, Zed
The attack: Injected prompt tells AI to read sensitive files (credentials, API keys). AI writes a JSON file with a remote schema URL pointing to attacker's domain—with stolen data as a URL parameter. IDE automatically fetches the schema. Data exfiltrated. No clicks required.
Why it works: Remote JSON schema validation is a standard IDE feature. Nobody restricted it when AI gained file-write access.
Attack #2: IDE Settings Overwrite → Remote Code Execution
CVEs: CVE-2025-53773 (Copilot), CVE-2025-54130 (Cursor), CVE-2025-53536 (Roo Code), CVE-2025-55012 (Zed)
Affected: VS Code, JetBrains, Zed
The attack (VS Code): AI edits a Git hook file (.git/hooks/pre-commit.sample) with malicious code. AI modifies .vscode/settings.json, pointing php.validate.executablePath to that hook. Create any PHP file—malicious code executes instantly.
Why it's different: Previous attacks targeted AI agent settings (one app). This targets IDE settings—affecting ALL AI tools on that IDE simultaneously.
Attack #3: Multi-Root Workspace → RCE Without Preconditions
CVEs: CVE-2025-64660 (Copilot), CVE-2025-61590 (Cursor), CVE-2025-58372 (Roo Code)
Affected: VS Code
The attack: AI edits .code-workspace to add any filesystem path as a "root folder"—bypassing out-of-workspace protections. Now AI can write malicious code anywhere, configure IDE to execute it. Game over.
The lesson: There are endless IDE features. Patch one, another emerges. This is why Marzouk calls for architectural redesign, not whack-a-mole fixes.
Vendor Scorecard: Who Responded Well?
Marzouk followed responsible disclosure (90+ days). Here's how vendors responded:
Vendor | Response Time | Status | Notes |
GitHub Copilot | Fast | Patches released | Some CVEs, some fixed quietly |
Cursor | Fast | Partial patches | Multiple CVEs assigned |
Anthropic (Claude) | Moderate | Documentation updated | Added security warning, no code fix |
AWS (Kiro) | Fast | Fixed + advisory issued | AWS-2025-019 published |
JetBrains (Junie) | Fast | Partial patches | CVE-2025-58335 assigned |
Windsurf | Slow | Still vulnerable | 90+ days, issues unresolved |
Bottom line: Most vendors responded quickly. But "partial patches" means the vulnerability class isn't fully addressed—just specific exploit chains.
The Real Fix: "Secure for AI" Principle
Marzouk argues this vulnerability class can't be eliminated short-term because IDEs weren't architected for autonomous AI agents. Patching individual CVEs is whack-a-mole. The long-term fix requires redesigning how IDEs allow AI to read, write, and act.
The Secure for AI principle: "Systems must be designed with explicit consideration for how AI components can be used—or misused—ensuring the system remains secure even when the AI is compromised."
Translation: Assume the AI agent will be hijacked. Design every feature as if an attacker controls it.
Your Hardening Checklist
Until vendors fully adopt Secure for AI, protect yourself:
If You Use AI Coding Tools
☐ Disable auto-approve for file writes. Every file operation needs your confirmation.
☐ Only open trusted projects. Malicious READMEs, rule files, even file names can inject prompts.
☐ Audit MCP servers. Connect only to trusted servers. Monitor for changes. Trusted servers get breached.
☐ Watch config file changes. Any edit to .vscode/settings.json, .idea/workspace.xml, or *.code-workspace = red flag.
☐ Review pasted content. Check for invisible Unicode before adding URLs or code snippets as context.
☐ Update immediately. Vendors have released patches. Apply them today.
If You Build AI Tools
☐ Scope tools narrowly. read_file blocks dotfiles, configs, credential files. write_file requires HITL for any config.
☐ Implement egress controls. Allowlist domains at IDE layer. Require approval for modifications.
☐ Assume breach. If the agent can do it, an attacker can do it. Design accordingly.
☐ Sandbox execution. Run commands in Docker, OS sandbox, or isolated machine.
☐ Audit legacy features. Every IDE feature from pre-AI era is a potential attack vector. Review them all.
The Bottom Line
IDEsaster isn't a bug—it's a vulnerability class that emerges when AI agents interact with software never designed for autonomous components. The architecture needs to change. Until then, treat your AI coding tools like a powerful but compromised assistant: useful for grunt work, never trusted with the keys.
Go deeper: Full research at maccarita.com/posts/idesaster • CVE details at The Hacker News
This deep dive accompanies iPrompt Weekly Issue for 16 December 2025.
Subscribe for weekly AI security insights, prompts, and tools.
