When I first started playing with LLMs, it felt like I was looking at the future. I could type a few sentences and this thing would give me back code, ideas, even product specs. It was intoxicating. I caught myself thinking: this is the innovation. This is the product.
It took me a while to realize I was wrong.
LLMs are not the product. They’re the new CPU.
Back when CPUs were new, they were the miracle. Everyone obsessed over clock speeds and transistor counts. You could actually sell hardware on the basis of "we have a faster CPU." Now? No one cares. CPUs are just part of the stack. You only think about them when they fail. The innovation moved up — into the software, the user experience, the things people actually touch.
The same thing is happening with LLMs. Right now, they’re still shiny. Companies raise money on “we’re building with AI.” People launch apps that are basically a thin wrapper over a model. And for a short while, that can work — because the novelty is doing the heavy lifting.
But novelty fades.
When LLMs become as boring as CPUs, the game changes. The magic will no longer be in calling an API and getting a paragraph of text back. It’ll be in how you orchestrate that capability, how you integrate it into a workflow, how you wrap it in context and guardrails so that it actually solves a problem.
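To make that concrete, here's a minimal sketch of the "orchestration beats the raw model call" idea — a toy support bot where the value is in the grounding context and the output guardrail, not the model itself. Everything here is hypothetical: `call_model` is a stand-in stub for whatever LLM API you'd actually use, and the refund logic is invented for illustration.

```python
# Sketch: the model call is one line; the product is everything around it.

def call_model(prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call.
    # In practice this would hit your provider of choice.
    return "REFUND APPROVED: $25"

def answer_with_guardrails(question: str, context: str,
                           max_refund: float = 50.0) -> str:
    # 1. Context: ground the model in data it wouldn't otherwise have.
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    raw = call_model(prompt)

    # 2. Guardrail: validate the output before it reaches the user.
    if "REFUND" in raw:
        amount = float(raw.split("$")[-1])
        if amount > max_refund:
            return "Escalated to a human agent."
    return raw
```

Swap `call_model` for any provider and nothing else changes — which is exactly the point: the model is interchangeable, the workflow around it is not.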
Think about it: no one brags that their app uses HTTP requests. Or that it runs on electricity. Those are invisible parts of the system. LLMs will go there too.
So if you’re building something today, stop pitching “we’re an AI app.” Start asking: what happens when the AI part is taken for granted? What will still make your product valuable? That’s the thing you need to double down on.
The sooner you stop seeing LLMs as magic, the sooner you can start building something that will matter when the magic is gone.
And sure — if AGI shows up, all bets are off. But that’s a different story.