
How $6 Million Beat $6 Billion

The three plays that won 2025—and your blueprint for what's next

By R. Lauritsen • December 30, 2025 • 8 min read

TL;DR

  DeepSeek trained a GPT-4-class model for $6M vs $100M+. Open-sourced it. Efficiency beat brute force.

  Anthropic paid $1.5B for pirated books—but legal acquisition + training = fair use. The rules are now clear.

  Cursor hit $29.3B in 2 years building for agents, not autocomplete. Practical tooling > model hype.

  The pattern: leverage beats scale. Better algorithms, legal clarity, useful tools—not bigger budgets.

  Your move: Audit model costs, lock down your data chain, adopt agentic tooling—before competitors do.

The Story Everyone Got Wrong

The 2025 AI narrative was supposed to be about scale. Trillion-dollar infrastructure bets. Nvidia's unstoppable rise. The biggest models from the richest labs winning by default.

That's not what happened.

A Chinese hedge fund's AI lab trained a frontier model for 1/16th the cost of GPT-4—then gave it away. A copyright ruling established that legal training is fair use—while piracy costs billions. A two-year-old code editor hit a $29B valuation by ignoring chatbot hype and building what developers actually need.

Three stories. One pattern: leverage beat scale.

"2025 proved that the best-funded lab doesn't win. The best-architected one does."

Here's exactly how each play worked—and what it means for your 2026 strategy.

Play #1: DeepSeek and the Efficiency Shock

What Happened

In January 2025, DeepSeek—a Hangzhou lab funded by quant hedge fund High-Flyer—dropped DeepSeek-R1. It matched or beat GPT-4o and OpenAI's o1 on major benchmarks. Then they open-sourced it under MIT license.

The number that broke brains: a reported $6 million in training costs for V3 (covering the final training run, not total R&D). OpenAI reportedly spent $100+ million on GPT-4. DeepSeek used ~1/10th the compute of Meta's Llama 3.1. Nvidia stock dropped. The "compute moat" thesis cracked overnight.

How They Did It

Mixture of Experts: Routes inputs to specialized sub-networks instead of running full model. Less compute per query.

RL from scratch: R1-Zero learned reasoning through pure reinforcement learning. Chain-of-thought emerged organically.

Distillation: Six smaller versions run on laptops, retaining most capability.

💡 Plain English: They made the model smarter about when to think hard—not just bigger. That's the breakthrough.
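The routing idea can be sketched in a few lines of Python. This is a toy illustration of top-k expert gating, not DeepSeek's actual architecture: the expert count, dimensions, and random "experts" are all made up for the example.

```python
import math
import random

random.seed(0)

NUM_EXPERTS = 8   # illustrative only; DeepSeek-V3 uses many more, finer-grained experts
TOP_K = 2         # only these experts actually run per token
DIM = 4

# Toy "experts": each is just a random linear map over a DIM-dim input.
experts = [[[random.gauss(0, 1) for _ in range(DIM)] for _ in range(DIM)]
           for _ in range(NUM_EXPERTS)]
gate = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def softmax(xs):
    mx = max(xs)
    es = [math.exp(x - mx) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def moe_forward(token):
    # 1. The gate scores every expert cheaply...
    scores = matvec(gate, token)
    # 2. ...but only the top-k experts actually compute anything.
    top = sorted(range(NUM_EXPERTS), key=lambda i: scores[i], reverse=True)[:TOP_K]
    weights = softmax([scores[i] for i in top])
    # 3. The output is the weighted sum of just those experts' outputs.
    out = [0.0] * DIM
    for w, i in zip(weights, top):
        y = matvec(experts[i], token)
        out = [o + w * yi for o, yi in zip(out, y)]
    return out, top

output, chosen = moe_forward([1.0, -0.5, 0.3, 0.8])
print(f"ran {len(chosen)}/{NUM_EXPERTS} experts: {sorted(chosen)}")
```

Here only 2 of 8 expert networks run per token, so the expert compute per query is roughly a quarter of a dense model of the same total parameter count.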

"DeepSeek aimed for accurate answers rather than detailing every logical step, significantly reducing computing time while maintaining effectiveness."

— Dimitris Papailiopoulos, Microsoft AI Frontiers

⚡ Key Takeaway: Stop defaulting to the priciest model. Benchmark DeepSeek against your use cases. You might be overpaying 10x.

Play #2: The $1.5 Billion Clarity

What Happened

August 2025: Anthropic settled Bartz v. Anthropic for $1.5 billion—largest copyright settlement in history. Authors alleged Anthropic downloaded 7+ million books from pirate sites to train Claude.

But the headline obscures the real story. Judge Alsup's split ruling:

Piracy ≠ fair use. Downloading from illegal sites is infringement. Period.

✓ Legal acquisition + training = fair use. Training on lawfully purchased content is protected. Anthropic won this outright.

"The resulting model does not replicate or supplant the works. It turns a hard corner and creates something different... like any reader aspiring to be a writer."

— Judge William Alsup, U.S. District Court

The Legal Playbook

✓ LEGAL (fair use)                 ✗ ILLEGAL
Buy books → scan → train           Download from pirate sites
Licensed datasets                  Shadow library torrents
Public web scraping                Bypassing access controls
Transformative use (training)      Infringing outputs

⚠️ Caveat: This covers past training inputs only. Claims about infringing outputs are still coming.

⚡ Key Takeaway: Audit your data supply chain. Ask vendors: "Where did your training data come from?" Clean provenance is now a requirement.

Play #3: Cursor and the Tooling Landgrab

What Happened

While everyone debated AGI, Cursor became the most valuable AI tool developers actually use. $1B ARR. $29.3B valuation. Two years old. That's Snowflake-at-IPO territory—for a code editor.

What They Got Right

Agents > autocomplete. Built for AI that edits multiple files, runs tests, iterates autonomously.

Project-level context. Indexes entire codebase. Refactoring across hundreds of files just works.

Model-agnostic. Claude, GPT, Gemini, DeepSeek. No lock-in.

Speed obsession. Composer runs 4x faster than comparable LLMs. Latency kills adoption.

"Night and day. Adoption went from single digits to over 80%. All the best builders were using Cursor."

— Stripe engineering team

The Landscape

Tool          Best At                        Weakness             Price
Cursor        Multi-file agents, speed       VS Code only         $20/mo
Copilot       GitHub integration             Agent mode lagging   $10-39/mo
Claude Code   200K context, big refactors    Terminal-only        Usage-based

⚡ Key Takeaway: The gap between agentic editors and basic autocomplete is a force multiplier. Evaluate your team's tooling now.

The Pattern: Leverage Over Scale

Play             Leverage Point       What It Beat
DeepSeek         Better algorithms    Brute-force compute
Anthropic case   Legal clarity        Move-fast-break-things
Cursor           Practical tooling    Model hype

The winners didn't outspend. They out-leveraged. That's the 2026 playbook.

Your 2026 Playbook

1. Benchmark Before You Buy

Test DeepSeek R1 against your actual use cases this month. Track cost-per-quality. Decision by end of Q1.
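One way to make "cost-per-quality" concrete is to rank models by dollars spent per unit of eval quality. The sketch below assumes you already have per-model quality scores and spend from your own eval runs; the model names and numbers are placeholders, not real benchmark results.

```python
from dataclasses import dataclass

@dataclass
class Result:
    model: str
    quality: float   # e.g. mean grader score on your eval set, 0-1
    cost_usd: float  # total spend for the run

def cost_per_quality(results):
    """Rank models by dollars spent per quality point (lower is better)."""
    return sorted(results, key=lambda r: r.cost_usd / max(r.quality, 1e-9))

# Illustrative numbers only -- run your own evals to get real ones.
runs = [
    Result("frontier-model", quality=0.91, cost_usd=48.00),
    Result("deepseek-r1",    quality=0.88, cost_usd=4.10),
]
for r in cost_per_quality(runs):
    print(f"{r.model}: ${r.cost_usd / r.quality:.2f} per quality point")
```

The point of the ranking: a model that scores a few points lower but costs 10x less often wins on cost-per-quality, and that's the number that should drive the Q1 decision.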

2. Audit Your Data Chain

Building models? Document provenance. Buying AI? Add "Where did training data come from?" to procurement.
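A provenance audit can start as a simple manifest check. The schema below (the source and license_ref fields, and the allowed-source list) is a hypothetical example for illustration, not a legal standard; adapt the field names and rules to your own pipeline and counsel's guidance.

```python
# Hypothetical manifest schema -- field names are made up for this sketch.
ALLOWED_SOURCES = {"purchased", "licensed", "public-web"}

def audit(manifest):
    """Flag dataset entries with missing or disallowed provenance."""
    flagged = []
    for entry in manifest:
        if entry.get("source") not in ALLOWED_SOURCES or not entry.get("license_ref"):
            flagged.append(entry["name"])
    return flagged

datasets = [
    {"name": "books-corpus",     "source": "purchased", "license_ref": "PO-2025-118"},
    {"name": "shadow-lib-dump",  "source": "torrent",   "license_ref": None},
]
print(audit(datasets))  # the torrent-sourced set gets flagged
```

Even a check this crude forces the question the Alsup ruling makes unavoidable: can you point to where every training input came from?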

3. Run a 30-Day Tooling Pilot

Pick your highest-velocity team. Give them Cursor. Measure PRs shipped. Make the org-wide call based on data.

4. Hunt Leverage, Not Scale

Every AI investment: "Where's the asymmetric advantage?" Best opportunities do more with less.

5. Buy the Dip

The 95% seeing no ROI will cut budgets. Be ready to acquire—talent, tools, strategic ground—when others retreat.

Go Deeper

→ DeepSeek technical analysis — MIT Technology Review

→ Full settlement terms — anthropiccopyrightsettlement.com

→ Cursor 2.0 docs — cursor.com

→ Enterprise AI ROI study — MIT Sloan

Which play are you implementing first? Reply and tell me—I read every response.

This deep dive accompanies the iPrompt Newsletter: 2025 Year in Review.

Subscribe for free for weekly analysis that turns AI news into action.

— R. Lauritsen

Stay curious—and stay paranoid.
