
Welcome to the iPrompt Newsletter
A malicious email. A hijacked AI agent. A resignation letter sent to someone's CEO—without their knowledge.
That's the attack OpenAI demonstrated Monday when explaining why prompt injection "may never be fully solved." Their defense? An AI trained to break their own AI before hackers can.
Meanwhile, Google dropped $4.75B on an energy company (not an AI company), ByteDance committed $23B to infrastructure despite U.S. chip bans, and 55,000 workers lost jobs this year with AI explicitly blamed.
The uncomfortable truth nobody's saying: the AI race is no longer about who has the best model. It's about who can power it, secure it, and afford the humans it replaces.

What you get in this FREE Newsletter
In Today’s 5-Minute AI Digest. You will get:
1. The MOST important AI News & research
2. AI Prompt of the week
3. AI Tool of the week
4. AI Tip of the week
…all in a FREE Weekly newsletter.
Clear communicators aren't lucky. They have a system.
Here's an uncomfortable truth: your readers give you about 26 seconds.
Smart Brevity is the methodology born in the Axios newsroom — rooted in deep respect for people's time and attention. It works just as well for internal comms, executive updates, and change management as it does for news.
We've bundled six free resources — checklists, workbooks, and more — so you can start applying it immediately.
The goal isn't shorter. It's clearer. And clearer gets results.

OpenAI Admits Prompt Injection Is Unsolvable—Deploys AI to Attack Itself
OpenAI published a security deep-dive on December 22 admitting that prompt injection—where hidden instructions hijack AI agents—is "unlikely to ever be fully solved." Their demo showed a planted email tricking ChatGPT Atlas into sending a resignation letter instead of an out-of-office reply. The fix: an "LLM-based automated attacker" trained via reinforcement learning to find exploits before real hackers do.
Why it matters: This isn't a bug that gets patched. It's an architectural reality. If you're using AI agents with access to your email or payments, you're accepting risk that can be reduced but not eliminated.
[Read the full story]

Google Buys Energy Company for $4.75B to Power AI
Alphabet announced Monday it's acquiring Intersect Power—a clean energy developer—for $4.75B plus debt. Not an AI startup. An energy company. CEO Sundar Pichai framed it as building "power generation in lockstep with data center load." The deal bypasses U.S. grid bottlenecks that are slowing AI expansion.
Why it matters: When your AI company spends billions on electricity instead of research, you're watching AI become an infrastructure business. The model is the easy part now.

ByteDance Plans $23B AI Spend—Despite Chip Restrictions
TikTok's parent company budgeted 160 billion yuan ($23B) for 2026 AI infrastructure, per the Financial Times. Half goes to chips—even with U.S. export controls blocking Nvidia's best hardware. ByteDance is also leasing overseas data centers where it can legally deploy restricted GPUs.
Why it matters: Export controls didn't stop the spending—they just moved it. China's AI leaders are building around restrictions, not backing down. This is a chip cold war with no end in sight.

AI Blamed for 55,000 Job Cuts in 2025—Companies Now Say It Out Loud
For the first time, companies are publicly citing AI as a reason for mass layoffs. Amazon cut 14,000 (largest in company history). Microsoft eliminated 15,000. Salesforce dropped 4,000 support roles—CEO Marc Benioff said AI handles 30-50% of work. Total U.S. layoffs in 2025: 1.17 million, highest since COVID.
Why it matters: 2025 is the year "AI might take your job" became "AI took 55,000 jobs and companies are bragging about it." The warning period is over.

Our Angle: The "AI Winners" Are Actually Losing
Here's the uncomfortable take nobody wants to say: the companies "winning" the AI race are hemorrhaging money to do it. Google didn't buy an AI startup this week—it bought a power company because AI is now an electricity arbitrage game.
ByteDance is spending $23B not because they're confident, but because falling behind is existential. OpenAI just admitted their flagship product has a permanent security flaw.
And the companies cutting 55,000 jobs? They're not saving that money—they're redirecting it to infrastructure that won't pay off for years. Here's what most coverage misses: AI is entering its railroad era—massive capital requirements, uncertain returns, and the companies that "win" might just be the ones who lose money the slowest.
The real winners might be the businesses that let others build the infrastructure, then rent it cheaply in three years when the dust settles. If your strategy is "buy every AI tool and deploy agents everywhere," you're betting on the railroad builders. If your strategy is "wait for prices to crash and cherry-pick what works," you might be betting on the merchants who used the railroad
But what can you actually DO about the proclaimed ‘AI bubble’? Billionaires know an alternative…
Sure, if you held your stocks since the dotcom bubble, you would’ve been up—eventually. But three years after the dot-com bust the S&P 500 was still far down from its peak. So, how else can you invest when almost every market is tied to stocks?
Lo and behold, billionaires have an alternative way to diversify: allocate to a physical asset class that outpaced the S&P by 15% from 1995 to 2025, with almost no correlation to equities. It’s part of a massive global market, long leveraged by the ultra-wealthy (Bezos, Gates, Rockefellers etc).
Contemporary and post-war art.
Masterworks lets you invest in multimillion-dollar artworks featuring legends like Banksy, Basquiat, and Picasso—without needing millions. Over 70,000 members have together invested more than $1.2 billion across over 500 artworks. So far, 25 sales have delivered net annualized returns like 14.6%, 17.6%, and 17.8%.*
Want access?
Investing involves risk. Past performance not indicative of future returns. Reg A disclosures at masterworks.com/cd
AI Prompt of the Week
The "Steelman Then Destroy" Debate
What it does: Forces AI to argue BOTH sides of a decision at full strength before revealing which side wins—eliminating the confirmation bias that plagues most AI-assisted decisions.
The prompt:
"I'm deciding whether to [decision]. First, spend 200 words making the STRONGEST possible case FOR this decision—assume I'm smart and have good reasons. Then spend 200 words making the STRONGEST possible case AGAINST—assume there are real risks I'm not seeing. Finally, tell me which argument is actually stronger AND what piece of information would flip your answer."
Why it works: Most prompts ask AI to "help me decide" which triggers agreement bias. This structure forces genuine adversarial analysis because you're explicitly asking for both sides to be argued well BEFORE the verdict.
Real-world application: A product manager used this to evaluate killing a feature. The AI's "against" argument surfaced a retention risk that changed the decision. The "what would flip it" identified the exact metric threshold they needed to monitor.
AI Tool of the Week
Google Workspace Studio
What it is: Google's new no-code platform (launched Dec 3) for building AI agents that automate tasks across Gmail, Drive, Docs, Sheets, and Chat—using plain English.
Why you need it: This isn't another chatbot. It's IFTTT with Gemini 3's reasoning built in. You describe what you want ("flag urgent emails and ping me in Chat"), and it builds a working agent. Early testers ran 20 million tasks in the alpha alone.
One-liner pitch: "It's Zapier if Zapier could actually read your emails and understand context."
Rating: ⭐⭐⭐⭐ (4/5) — Powerful but brand new; expect rough edges.
Key features: Plain-language agent creation • Deep integration with Workspace apps • Connects to Salesforce, Jira, Asana, Mailchimp • Shareable agents across teams • Templates for common workflows.
Best use case: Automating the stuff you keep saying you'll automate—meeting summaries, status reports, email triage—without needing IT or Zapier expertise.
Caveat: Given what we just learned about prompt injection, be thoughtful about what permissions you grant. Start with low-stakes automations.
Link: studio.workspace.google.com — Included with Business/Enterprise Workspace plans.
AI Tip of the Week
Treat AI Agents Like Interns With Access to Your Email
The tip: Before deploying any AI agent, ask: "Would I give a first-week intern this level of access?" If not, the agent shouldn't have it either.
Why it works: OpenAI's prompt injection demo showed an agent being tricked by a malicious email. The problem wasn't the AI's intelligence—it was the AI's access. An agent that can read your inbox AND send emails AND access payments is three vulnerabilities, not one. Interns start with view-only access for a reason.
Limitations: Tight permissions mean more manual steps. You'll re-authorize more. But this is the trade-off until multi-model safety systems mature—and even then, OpenAI says the risk doesn't fully go away.
Pro move: Create a dedicated email address and calendar for AI agents. Route non-sensitive tasks there. Keep your primary accounts human-only. It's not paranoid—it's the same logic behind why you don't give contractors your personal login.
Your Move
You just learned:
• Prompt injection is permanent—OpenAI now uses AI attackers to find flaws because the problem can't be fully solved
• The AI race is now an infrastructure war—the winners are spending billions on power and chips, not models
• 55,000 jobs were cut with AI explicitly blamed—this is no longer a future threat, it's a documented present
Now implement one.
Most readers will nod at the infrastructure thesis and do nothing. The ones who audit their AI agent permissions this week, or run the Steelman-Destroy prompt on their next big decision, will be the ones who aren't surprised when the next breach or pivot happens.
— R. Lauritsen
P.S. Forward this to someone giving AI agents full access to their inbox. Before their AI resigns for them.
P.P.S. Next week: 2025 AI Year in Review—what actually mattered, what was hype, and the five bets worth making in 2026.
Stay curious—and stay paranoid.


