Google Adds Marvell to Its AI Chip Team for Faster Inference

USA, Sunday, April 19, 2026

Google is negotiating with Marvell Technology to add two new AI processors to its custom silicon portfolio:

  • Memory‑Processing Unit (MPU) – Works alongside Google’s existing Tensor Processing Units (TPUs) to accelerate memory‑bound workloads.
  • Inference‑Only TPU – Dedicated solely to inference, the part of AI that answers user requests rather than training models.

Marvell would design these chips in a role similar to the one MediaTek played in developing Google’s latest Ironwood TPU.

Multi‑Supplier Strategy

These talks come right after Broadcom secured a long‑term supply agreement for TPUs and networking parts through 2031. Instead of replacing Broadcom, Google is building a multi‑supplier ecosystem:

  • Broadcom – High‑performance chips
  • MediaTek – Cost‑effective variants
  • Marvell – New inference‑optimized units
  • TSMC – Foundry manufacturing

The approach mirrors automotive supply chains, where multiple vendors prevent over‑reliance on a single source.

Focus on Inference

Training large AI models is a one‑time, compute‑heavy event that can last weeks or months. Inference, however, runs continuously—handling every user query and scaling with demand. Because billions of users rely on AI services daily, even a modest cost reduction per inference can translate into substantial savings for Google.
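The scale argument can be made concrete with a back‑of‑envelope calculation. Every figure below (daily query volume, per‑query cost, size of the reduction) is an illustrative assumption for the sketch, not a reported number from Google or Marvell:

```python
# Back-of-envelope estimate: savings from a small per-inference cost cut.
# All figures are hypothetical placeholders, not actual Google numbers.

daily_inferences = 5_000_000_000   # assumed 5 billion queries per day
cost_per_inference = 0.0004        # assumed $0.0004 per query
reduction = 0.10                   # assumed 10% cost reduction per query

daily_savings = daily_inferences * cost_per_inference * reduction
annual_savings = daily_savings * 365

print(f"Daily savings:  ${daily_savings:,.0f}")   # $200,000 under these assumptions
print(f"Annual savings: ${annual_savings:,.0f}")  # $73,000,000 under these assumptions
```

Even with deliberately conservative inputs, the annual figure lands in the tens of millions, which is why a chip dedicated solely to cheaper inference can pay for its own development.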

Existing and Upcoming Chips

  • Ironwood TPU – Launched this month, offering tenfold performance over its predecessor and suitable for massive superpods.
  • Marvell’s Chips – Expected to target different workloads or price points, adding flexibility to Google’s strategy.

Marvell’s Growing Influence

  • Built processors for Amazon, Microsoft, and Meta in 2025.
  • Holds a $1.5 billion run‑rate in custom silicon design.
  • Benefited from Nvidia’s investments and a $5.5 billion acquisition of Celestial AI.
  • Projected to capture ~25% market share in custom AI chips by 2027.

Market Landscape

  • Broadcom dominates the custom AI accelerator market, controlling >70% of share and targeting $100 billion in revenue by 2027.
  • Google’s diversified supplier base—Broadcom, MediaTek, Marvell, and TSMC—reduces risk and maintains control over its silicon roadmap.

Bottom Line

Google’s partnership with Marvell underscores a strategic shift toward inference‑optimized hardware, enabling more efficient handling of billions of AI requests while keeping its supply chain robust and flexible.