We talk about AI alignment risk and AI misuse risk. We’re missing a third category: AI supply chain risk.

AI alignment — ensuring AI systems reliably do what we intend — is probabilistic, not solved. Current techniques reduce the chance of harmful behavior but can’t guarantee zero risk. Safety is a spectrum, not a switch.

So what keeps things in check? A practical backstop: AI depends on infrastructure it doesn’t control. Chips, operating systems, cloud platforms, energy grids — all built and operated by humans. Even if alignment isn’t perfect, we control the infrastructure AI runs on. We can audit it, constrain it, correct it, or shut it down. That’s the containment layer.

Now look at who is under the most pressure to aggressively integrate AI into their development processes. Not random startups. The companies that build AI’s own infrastructure:

  • Chip designers using AI to accelerate semiconductor design
  • OS makers embedding AI into kernel development and firmware
  • Cloud providers using AI to optimize the infrastructure AI runs on

Each decision is rational. The companies doing this — Nvidia, TSMC, Amazon, Microsoft, Apple, Google, Samsung — aren’t reckless. They’re responding to real market pressure.

But in aggregate, AI becomes a supplier in its own supply chain. The infrastructure meant to contain AI is increasingly built with AI’s help.

And the pressure doesn’t stop at adoption. Humans in the loop slow iteration cycles — exactly what AI was adopted to accelerate. The expertise to review AI-generated chip designs or kernel code is rare and expensive. So the incentive is to progressively thin out human verification, because that layer is the bottleneck.

The Circular Dependency

In any critical supply chain, independence between layers is what makes verification possible. You don’t let a supplier audit themselves.

But that’s exactly the dependency we’re creating. If AI helps design a chip, write OS code, and optimize a compiler — and we use that chip, OS, and compiler to run, evaluate, and constrain AI — we have a circular dependency. The tooling we’d use to verify AI was itself produced with AI’s help.
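One way to make this concrete is to model the supply chain as a directed graph, where an edge A → B means "A is a supplier to B," and check for cycles. A layered chain is a DAG and can be verified bottom-up; once AI's outputs feed back into the lower layers, a cycle appears and no layer can be verified independently of the others. This is a minimal sketch; the node names and edges are illustrative assumptions, not a real inventory:

```python
def find_cycle(edges):
    """Return one cycle as a list of nodes, or None if the graph is acyclic.

    Uses depth-first search with white/gray/black coloring: a back edge
    to a gray (in-progress) node means we have found a cycle.
    """
    graph = {}
    for src, dst in edges:
        graph.setdefault(src, []).append(dst)
        graph.setdefault(dst, [])

    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}
    stack = []

    def dfs(node):
        color[node] = GRAY
        stack.append(node)
        for nxt in graph[node]:
            if color[nxt] == GRAY:  # back edge: cycle found
                return stack[stack.index(nxt):] + [nxt]
            if color[nxt] == WHITE:
                cycle = dfs(nxt)
                if cycle:
                    return cycle
        stack.pop()
        color[node] = BLACK
        return None

    for node in list(graph):
        if color[node] == WHITE:
            cycle = dfs(node)
            if cycle:
                return cycle
    return None

# Before AI enters its own supply chain: a clean, layered DAG.
layered = [("chips", "os"), ("os", "cloud"), ("cloud", "ai_model")]

# After: AI output feeds back into chip design and kernel code.
circular = layered + [("ai_model", "chips"), ("ai_model", "os")]

print(find_cycle(layered))   # None: verification can proceed layer by layer
print(find_cycle(circular))  # ['chips', 'os', 'cloud', 'ai_model', 'chips']
```

The point of the toy model: the problem isn't any single edge. Each edge is a rational local decision; the cycle only exists in aggregate.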

This doesn’t require malice or some dramatic breakout scenario. It’s a supply chain integrity problem — the same kind we already know how to worry about, applied to a supplier we’re not used to thinking of as one.

What’s Missing

We have mature disciplines for hardware supply chain security. We have growing frameworks for AI alignment. What we don’t have is a framework for when AI is both the product and the supplier.

For engineering leaders: this doesn’t mean slow down. It means think about where AI appears in your dependency chain — not just as a tool, but as a supplier whose outputs you build on.
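One hedged sketch of what "AI as a supplier" could look like in practice: extend an SBOM-style inventory with a provenance flag and surface artifacts whose only verifiers were themselves AI-assisted. The field names (`produced_with_ai`, `verified_by`) and the records below are hypothetical, not drawn from any existing SBOM standard:

```python
from dataclasses import dataclass, field


@dataclass
class Artifact:
    """One entry in a dependency inventory, with AI-provenance recorded."""
    name: str
    produced_with_ai: bool
    verified_by: list = field(default_factory=list)  # names of verifying artifacts


def independent_verification_gaps(artifacts):
    """AI-assisted artifacts whose listed verifiers are all AI-assisted too.

    These are the spots where the supplier is, in effect, auditing itself.
    """
    by_name = {a.name: a for a in artifacts}
    gaps = []
    for a in artifacts:
        if not a.produced_with_ai:
            continue
        verifiers = [by_name[v] for v in a.verified_by if v in by_name]
        if verifiers and all(v.produced_with_ai for v in verifiers):
            gaps.append(a.name)
    return gaps


inventory = [
    Artifact("kernel_module", produced_with_ai=True, verified_by=["static_analyzer"]),
    Artifact("static_analyzer", produced_with_ai=True),
    Artifact("compiler", produced_with_ai=False),
]

print(independent_verification_gaps(inventory))  # ['kernel_module']
```

The useful output isn't the flag itself but the gap list: places where every check on an AI-produced artifact runs through another AI-produced artifact, and where adding one independently built verifier would break the cycle.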