In partnership with

Welcome to the iPrompt Newsletter
Friday Edition

What you get in the Friday Edition

  1. Weekly Scoreboard

  2. Top Headlines of the Week

  3. Our Investing Angle

  4. Two Ideas to Research This Weekend

  5. AI Investment Framework

  6. Your Move


…all in a FREE Weekly newsletter.

Accio Work: Your Business, On Autopilot

Meet Accio Work, the agentic workspace designed to run your business operations end to end. From sourcing products and negotiating with suppliers to managing your store and launching marketing campaigns, Accio Work handles the execution so you don’t have to.

Powered by verified capabilities and deep integrations with business tools, it doesn’t just generate ideas — it takes action. Backed by Alibaba.com’s global supplier network and over 1B products, it seamlessly connects strategy to execution.

Stay in control while everything runs on autopilot.

iPrompt Signals

AI & robotics investing — explained so you can actually act on it.

ISSUE 08 // Friday, 1 May 2026 // 5–8 min read

THE HOOK

Four hyperscalers confirmed a combined $650–725 billion in 2026 AI capex this week — the largest concentrated infrastructure spend in tech history. Alphabet ripped 10%. Meta dropped 9%. And on Thursday, Nvidia — the company every one of them supposedly funnels that money toward — fell 4.6% and shed $230 billion in a single session. The capex headline got written everywhere. The part the calls actually re-priced — who builds the chips inside that capex — is where the next twelve months get decided.

WHAT TO DO WITH IT

Bull case: AVGO benefits as Meta MTIA, Google TPU, and Amazon Trainium scale through Broadcom’s silicon. Bear case: NVDA margins compress in 2027–2028 even as revenue grows. Named risks: Oracle, SMCI, and ARM are the structural losers if the custom-silicon shift accelerates. Full thesis below.

WEEKLY SCOREBOARD

TICKER     PRICE (APR 30)   WEEK %    WHAT HAPPENED
NVDA       $199.57          -4.1%     Sold off Thursday on custom-silicon ramp
AVGO       $417.43          -1.3%     Hit ATH $422.76 mid-week before fading
MSFT       $407.78          -5.1%     AI run rate $37B; capex fears bite
GOOGL      $381.94          +9.9%     Cloud +63% YoY; capex raised to $190B
META       $611.91          -8.9%     Capex hiked to $145B; tax-inflated EPS
AMZN       $265.06          +2.0%     AWS +28%, fastest in 15 quarters
BOTZ       $37.96           +1.7%     Lifted by Alphabet halo; ISRG +7%
S&P 500    7,209            +1.0%     Best monthly run since 2020
VIX        16.89            -10.2%    Calmer than the NVDA selloff suggests

Bottom line: Hyperscaler demand is real and accelerating. The question the market is now asking — and didn’t ask twelve months ago — is whether Nvidia captures all of it, or whether AVGO, in-house chips, and AMD start eating margins from the edges.

TOP HEADLINES

1. Big Four hyperscalers raise 2026 AI capex to $650–725 billion.

Microsoft, Alphabet, Meta, and Amazon all reported Q1 on April 28–29 and used the prints to lift their 2026 infrastructure budgets. Alphabet now $180–190B. Meta $125–145B. Amazon ~$200B. Microsoft tracking a similar pace. The combined number is roughly the GDP of Argentina. What it means: the AI infrastructure cycle isn’t slowing — it’s structurally re-rating upward, and component pricing (GPUs, HBM memory, power) is part of why.

2. Alphabet rips 6% on Cloud beat, demand exceeds supply.

Cloud revenue $20.03B (+63% YoY) versus $18.05B expected. CEO Sundar Pichai told the call: “We are compute constrained in the near term. Our cloud revenue would have been higher if we were able to meet the demand.” That sentence — delivered casually on a hyperscaler earnings call — is the one investors actually paid for. What it means: enterprise AI demand at the platform layer is real revenue, not just promise.

3. Meta drops 8–10% despite 33% revenue growth.

Capex guide raised by $10B on each end. Mark Zuckerberg said the increase reflects “higher component pricing” — code for: GPUs and HBM memory cost more than we modelled. EPS was inflated by an $8B one-time tax benefit. Operating margin held at 41%, the line bulls wanted defended.

🌱 NEW TO INVESTING? HERE’S WHAT THIS MEANS

When a company spends $145B on factories, servers and chips (capex), that money doesn’t show up in profit immediately — it gets depreciated over years. So the cost hits earnings later. Meta’s revenue grew 33%, but investors were doing the maths: more capex means heavier depreciation in 2027 and 2028, which compresses future profits. That’s why the stock dropped even though today’s numbers were great.
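The maths above can be sketched in a few lines of Python. This is a back-of-the-envelope illustration only — the five-year straight-line schedule is an assumption for clarity; real useful lives vary by asset (servers roughly 5–6 years, buildings far longer), and companies mix methods.

```python
# Back-of-the-envelope: how $145B of capex flows into future earnings.
# Assumes straight-line depreciation over 5 years (an illustrative
# simplification -- actual useful lives and methods differ by asset).
capex_billions = 145
useful_life_years = 5

annual_depreciation = capex_billions / useful_life_years
print(f"Annual depreciation: ${annual_depreciation:.0f}B per year")

# The cash leaves today, but the earnings hit is spread over 2027-2031:
for year in range(2027, 2027 + useful_life_years):
    print(f"{year}: ${annual_depreciation:.0f}B of depreciation expense")
```

That $29B/year of future expense, not today's cash outflow, is what investors were pricing into 2027–2028 profit forecasts.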

4. Nvidia falls 4.6% on Thursday, loses $230B in a day.

The selloff came after the hyperscaler earnings season — not before. Three reasons stacked: Amazon explicitly highlighted Trainium chip growth on its call, Alphabet announced it will sell its custom TPU chips to outside customers, and Reuters reported Nvidia’s restricted B300 servers now selling for ~$1M each on the China grey market. Bullish on long-run TAM, bearish on Nvidia’s cut of it. Investment implication: the bull case for AVGO just got stronger. Meta runs MTIA on Broadcom. Google sells TPUs designed with Broadcom. Custom silicon is no longer hypothetical.

5. OpenAI quietly reworks $500B Stargate.

The Financial Times confirmed this week what had been rumoured since early April: OpenAI has paused UK Stargate sites, abandoned the Abilene Texas expansion, and is moving from owning data centres to leasing compute. Oracle holds the structural bag — see the named-losers paragraph below.

OUR INVESTING ANGLE

Everyone’s watching the $700 billion capex number. The smarter bet is watching where inside that number the dollars are flowing.

The defining feature of this earnings week wasn’t that the hyperscalers raised capex. We knew that was coming. The defining feature was what they said about how they’re spending it.

Three signals stacked, and all three point in the same direction:

Headline 4 — Amazon called out Trainium growth on the same call that lifted AWS guidance. Alphabet announced it will sell custom TPU chips externally. Meta’s MTIA Gen 2, designed with Broadcom, is now in production.

Headline 1 — Meta cited “higher component pricing” as the reason for raising capex. Translation: the bill of materials inside an AI rack is being reshaped, and the parts that aren’t Nvidia GPUs are growing.

Headline 5 — Even OpenAI, the company most levered to Nvidia, is moving to leased compute and bilateral deals across AMD, Broadcom, and CoreWeave. The single-vendor era is fraying at the edges.

The thesis: The chip stack is unbundling. Nvidia still owns ~90% of accelerator share and isn’t losing the centre — but the edges are being eaten by hyperscalers who can finally afford to design their own silicon and by partners (Broadcom, Marvell) who help them do it. That’s a gross-margin story, not a revenue story — Nvidia’s revenue keeps growing, but the price it can charge per accelerator compresses as Trainium, TPU, and MTIA scale into 2027–2028. AVGO is the cleanest play on that compression. So is Marvell.

So yeah. AVGO is up 139% in twelve months for a reason.

Who gets hurt? Three specific names, not vibes. Oracle — the most exposed to Stargate scaling back, with hundreds of billions of off-balance-sheet exposure to a customer that just halved its build-out plans. Super Micro Computer (SMCI) — the integration play whose entire business model is bolting Nvidia GPUs into reference racks; if Meta and Google buy fewer Nvidia systems and more custom-silicon racks, SMCI’s 2027 revenue base shrinks. Arm Holdings (ARM) — the indirect victim; its IP licensing model assumed every hyperscaler would keep paying royalties on Nvidia-adjacent designs, but custom silicon means more in-house architecture and less licensable surface area. Watch all three relative to MSFT and GOOGL through Q2.

WHAT COULD GO WRONG? (THE BEAR CASE)

Custom silicon takes longer than the bulls model. Designing AI chips is hard. Google’s TPU took a decade. Meta’s MTIA is on Gen 2 and still represents a small share of compute. Nvidia’s CUDA software moat is genuine and hyperscalers can’t replicate it overnight. The thesis can be right and still take three years longer to play out than today’s price action implies.

Recession resets the whole AI capex cycle. All of this assumes 2026 capex actually gets spent. If the macro picture deteriorates — Iran tensions, oil above $110, a real earnings recession — the hyperscalers cut capex first and ask questions later. Both AVGO and NVDA fall in that scenario.

Nvidia adapts. They’ve done it before. Networking (Mellanox), software (CUDA), data centre systems (DGX/HGX) — Nvidia keeps moving up the stack. If Blackwell Ultra and Rubin land at the right price-performance ratio, the custom silicon story becomes a sideshow rather than a substitution.

Size your position for the possibility that the thesis takes longer than expected. The directional call is the easy part — the timing is where most theses break.

TWO IDEAS TO RESEARCH THIS WEEKEND

Not recommendations — starting points for your own research.

Idea 1 — Broadcom (AVGO): the obvious one, and that’s the problem.

The case writes itself. That’s also why I almost didn’t include it. When the case writes itself, half the upside is already in the price — and AVGO is up 139% in twelve months. But it stays in the issue because the structural setup is genuinely rare: one company that gets paid no matter which hyperscaler wins.

Why now: Meta confirmed MTIA Gen 2 is in production with Broadcom. Google’s TPU partnership is the largest external silicon contract Broadcom has ever disclosed. Two of the four hyperscalers now run custom silicon roads that lead through AVGO.

The case: Meta picks MTIA — Broadcom inside. Google picks TPU — Broadcom inside. Amazon picks Trainium — partial Broadcom involvement. Plus VMware software at 70%+ margins. You’re not picking which hyperscaler wins; you’re owning the toll booth.

The risk: P/E ~80 at $417. Most of the easy money’s been made. Any earnings miss or capex cut by Meta or Google hits twice — through chip revenue and multiple compression.

Tripwire: If Q3 earnings (early September) show AI semiconductor revenue below $9B annualised — versus the $10B+ trajectory implied by the Google TPU and Meta MTIA disclosures — the custom silicon ramp is slower than the multiple assumes. Re-rate from there.

How to research: Ticker AVGO. ETF exposure via SOXX (~9% weight) or SMH (~6%).
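If you go the ETF route, it helps to know your look-through exposure — how many dollars of AVGO you effectively own inside the fund. A minimal sketch, using the approximate weights quoted above (~9% in SOXX, ~6% in SMH) and a hypothetical $5,000 position; check the fund's current holdings before relying on these numbers.

```python
# Look-through exposure: dollars of one holding inside an ETF position.
# Weights are the approximate figures quoted above and will drift --
# always check the fund's published holdings.
def look_through(invested: float, weight: float) -> float:
    """Effective dollar exposure to a single holding inside an ETF."""
    return invested * weight

soxx_position = 5_000          # hypothetical $5,000 in SOXX
avgo_weight_in_soxx = 0.09     # ~9% weight, per the note above

exposure = look_through(soxx_position, avgo_weight_in_soxx)
print(f"Effective AVGO exposure: ${exposure:,.0f}")
```

The same arithmetic works for any holding: multiply position size by the fund's published weight.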

Idea 2 — The Anthropic-adjacent trade: Amazon (AMZN).

Why now: Anthropic is reportedly raising $50B at a $900B valuation — surpassing OpenAI. Amazon committed up to $25B in compute spend with Anthropic and just announced GPT models will also run on Bedrock. AWS now hosts the two largest LLM customers on earth and grew 28% YoY (fastest in 15 quarters).

The case: AMZN is the cleanest public-market proxy for Anthropic’s growth without taking private market dilution risk. Bedrock is the distribution layer. AWS is the compute layer. Amazon owns both and gets paid on tokens regardless of who wins the model war.

The risk: AMZN at ~$265 trades at 35x forward earnings — not cheap. Retail margins are still volatile. AWS growth could decelerate if hyperscaler discipline kicks in.

Tripwire: If AWS growth dips below 25% YoY at Q3 earnings (late October) — the first deceleration in 15 quarters — the AI tailwind narrative weakens and multiple compresses. Calendar-checkable.

How to research: Ticker AMZN. For AI-cloud basket exposure: SKYY ETF or KOMP.

AI INVESTMENT FRAMEWORK

Living portfolio framework by layer. Not financial advice — research starting points only.

LAYER           TICKERS             YTD    CONVICTION     RISK     SIZING
Infrastructure  NVDA, AVGO, MU      +14%   HIGH ↑↑        ●●●○○    15–20%
Platforms       MSFT, GOOG, AMZN    +18%   HIGH ↑         ●●○○○    15–20%
Applications    PLTR, CRM, NOW      +8%    MEDIUM ↔       ●●●○○    5–10%
Physical AI     BOTZ, ISRG, FANUC   +11%   MEDIUM ↑       ●●●●○    5–10%
Cybersecurity   CRWD, PANW, ZS      +6%    DEVELOPING ↑   ●●●○○    5–10%
Global          BABA, 9984.T, SAP   +9%    DEVELOPING ↔   ●●●●●    5%

Per-layer notes

Infrastructure — still the core, but composition is shifting underneath. NVDA: still 90% accelerator share, but watch margin pressure into 2027. AVGO: hit all-time high this week; the custom silicon partner across MTIA, TPU, and parts of Trainium. MU: HBM memory is sold out through 2026 — this is the input that drove Meta’s capex hike.

Platforms — the big winner of earnings week. MSFT: AI run rate $37B (+123% YoY); cloud margins still expanding. GOOG: cloud +63% YoY, demand exceeds supply. AMZN: AWS +28%, fastest in 15 quarters; also the Anthropic distribution layer.

Applications — still waiting for the productivity proof. No conviction change.

Physical AI — quietly accelerating. BOTZ at $37.96, ETF flows positive 6 months running. ISRG +7% on Q1 beat (revenue +23% YoY). JAL’s humanoid pilot next month is worth tracking — first paying-customer deployment in Japan.

Cybersecurity — added April 2026, still building. CRWD, PANW, ZS all rangebound. Conviction unchanged — waiting for the next major incident to test thesis.

Global — Watching Japan robotics for paying-customer use cases (JAL pilot launches next month). China remains untouchable for most US allocators. SAP holding firm on European AI agents traction.

What we’re watching (next 2 weeks)

DATE       EVENT                    QUESTION TO TRACK
20 May     NVDA Q1 earnings         Does data centre revenue growth show first signs of customer concentration risk?
Mid-May    Anthropic round close    Does the final valuation print at $900B+?
Late May   OpenAI investor update   Has Stargate UK formally been cancelled, or just paused?

Changes this week

Conviction on Infrastructure raised from HIGH to HIGH ↑↑ — the bifurcation between Nvidia and custom silicon makes the layer more interesting, not less.

No tickers added or removed.

Cybersecurity conviction held at DEVELOPING — quiet week, no thesis test.

Disclaimer: This newsletter is for informational and educational purposes only and does not constitute financial advice. iPrompt Signals is not a registered investment advisor. Always conduct your own research and consult a qualified financial professional before making investment decisions.

YOUR MOVE

The week’s three takeaways, traceable to the headlines and ideas above:

The $700B capex story is the wrong story to focus on. The right story is what’s inside the capex — the silicon mix is shifting, and AVGO is up 139% over twelve months because of it.

The named losers this week are Oracle, SMCI, and ARM. Stargate scaling back is a balance-sheet event for Oracle. SMCI’s reference-rack model is exposed if hyperscalers buy more custom silicon. ARM’s licensing surface area shrinks as in-house chip design grows.

NVDA earnings on 20 May are the next test. Watch data centre revenue growth and any commentary on customer concentration. That print will either confirm or break the unbundling thesis for Q2.

Now research one. AVGO is the spine of this week’s thesis — direct ticker exposure, Q3 earnings as the tripwire, ETF wrappers (SOXX, SMH) for sizing. AMZN is the defensive alternative if you’d rather own the Anthropic-adjacent platform play. Pick one and reply with which.

Hit reply and tell me which one you’re digging into. The replies are the part of this job that actually keeps me curious.

🌱 SHORT TAKE (for the broad-exposure reader)

The simplest way to play this week’s bifurcation thesis without picking a single chip company is SOXX (iShares Semiconductor ETF) or SMH (VanEck Semiconductor ETF). Both hold NVDA, AVGO, AMD, MRVL, MU together — so you’re not betting on which chip wins, just that AI silicon spending keeps compounding. Not a recommendation — a starting point.

Stay curious — and stay qualified.

— R. Lauritsen

Editor, iPrompt Signals

Know someone building an AI position? Forward this — they’ll thank you by Friday.

P.S. — Three months ago Anthropic was valued at $380B. Today the talk is $900B. The pace is the story. The compute is the constraint.

QUICK GLOSSARY

ASIC — Application-Specific Integrated Circuit. Think of an Nvidia GPU as a Swiss Army knife — does many things, well. An ASIC is a single razor-sharp blade — does one thing, brilliantly, and can’t do anything else. Trainium, TPU, and MTIA are all ASICs.

Capex — Capital expenditure. Money spent on long-lived assets like data centres, servers, and chips. Doesn’t hit profit immediately — gets depreciated over years. Spending $145B today shows up as $25–30B/year of cost over 2027–2031, not all at once.

Custom silicon — Chips designed by the buyer (Amazon, Google, Meta) instead of bought from Nvidia. Cheaper at scale once you’ve already amortised a billion dollars of design cost — which is why only the hyperscalers can play.

HBM — High Bandwidth Memory. The expensive memory chips stacked on top of every AI accelerator. Currently sold out through 2026, which is part of why capex is rising. Micron and SK Hynix make most of it.

Hyperscaler — The four cloud giants who buy most of the world’s AI chips: Microsoft Azure, Amazon AWS, Google Cloud, Meta (for in-house use). Combined 2026 capex ~$700B — roughly the GDP of Argentina.

MTIA — Meta’s custom AI chip family, second generation now in production with Broadcom.

TPU — Google’s Tensor Processing Unit. Now being sold to external customers — a strategic shift announced this week.

Trainium — Amazon’s custom AI training chip, central to AWS’s margin story.

VIX — Volatility index. Below 17 means traders are calm. Above 25 means real fear is being priced. At 16.89 right now — calmer than the NVDA selloff would suggest.

iPrompt Signals

Published Fridays by FrontWave Media Ltd · Limassol, Cyprus

Your docs are being read by AI. Are they ready?

Over 50% of traffic across Mintlify's customer base is now AI agents, not humans. If your docs aren't structured for agents, your product is invisible to AI. Mintlify just raised a $45M Series B to build the knowledge layer for the agent era.