"If your strategy is “use OpenAI’s API and wrap a UI around it,” you don’t have a company, you have an expensive demo that can be cloned overnight."
So much this.
"Here’s the brutal truth: Inference costs are the new AWS bill."
And this.
"Let’s be blunt: most AI startups right now aren’t profitable, even if they look like they’re growing. They’re subsidizing user adoption with VC dollars while ignoring the economics."
And this.
There are so many spot-on observations in this assessment of AI startups.
Now, do the math. Expand these observations linearly and ask: How does the endgame look? The picture we'd be looking at would be grim. Unless we can predict that the economy behind AI capabilities changes *fundamentally*, the viability is not there.
Even the old "grow, grow, grow" tactic won't do, as the operations are suddenly costly. It's the new infrastructure bill, but this time is an order of magnitude (or two) higher.
I think Duolingo is not a good example of failing to adopt to AI? Did they get negative press for layoffs, yes. Did the stock price almost 4x since ChatGPTs launch, yes!
Really solid framework. What stood out to me is how it balances profitability with defensibility, two things many AI startups overlook in the rush to ship. The reminder that retention and differentiation are as critical as raw capabilities is spot on.
"You don't control Pricing. You don't control Performance".
To be fair, that's not a downside for LLM-wrappers only.
The same threat is possed by relying on hyperscalers for cloud infra: if AWS raises API rates, your margins plummets; it AWS throttles performance, the same.
I appreciate the effort here, but honestly, 7,500 words felt like a real slog for what could've hit hard at 2,000. The core insights are genuinely sharp: AI economics don't map onto SaaS at all, commoditization happens faster than founders can even react, and inference costs will absolutely kill you if you don't engineer around them from day one.
That said, some of this framing could apply to any startup. "Expensive demos versus real companies" is pretty much a universal trap. What makes AI different is how brutally compressed everything gets: features get absorbed by foundation models in quarters instead of years, and scaling actually increases your costs instead of decreasing them. That part really is new and terrifying.
What struck me most was how close the piece comes to naming the deeper problem. The analysis catalogs every single symptom perfectly: shattered moats, runaway costs, vanishing differentiation. But then it stops at calling these "strategy challenges." The more radical reading? Intelligence as a utility might just be fundamentally incompatible with venture economics. If abundance breaks artificial scarcity, no amount of clever frameworks can fix that.
Plus, the piece treats OpenAI as an "API dependency" to escape from, but misses that they're actively building the rails everyone runs on. It's not just commoditization; it's platform capture.
Still, it's a valuable wake-up call for founders. I just wish the conclusion had the guts to question not just founder tactics, but whether the whole system still makes sense.
The hard truth is that having a PhD in AI coding doesn't cut the mustard. You need to think cost throughout your business modelling, most especially inference cost once you scale up. You could scale up overnight and so does your inference costs. Even giant's like AWS are scratching their heads over this.
"If your strategy is “use OpenAI’s API and wrap a UI around it,” you don’t have a company, you have an expensive demo that can be cloned overnight."
So much this.
"Here’s the brutal truth: Inference costs are the new AWS bill."
And this.
"Let’s be blunt: most AI startups right now aren’t profitable, even if they look like they’re growing. They’re subsidizing user adoption with VC dollars while ignoring the economics."
And this.
There are so many spot-on observations in this assessment of AI startups.
Now, do the math. Extrapolate these observations linearly and ask: what does the endgame look like? The picture is grim. Unless the economics behind AI capabilities change *fundamentally*, the viability just isn't there.
Even the old "grow, grow, grow" tactic won't do, because operations are suddenly costly. It's the new infrastructure bill, but this time it's an order of magnitude (or two) higher.
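To make that concrete, here is a minimal back-of-envelope sketch in Python, with made-up numbers, of why linear growth doesn't rescue the margin the way it does in traditional SaaS: fixed hosting amortizes away with scale, per-request inference doesn't.

```python
# Back-of-envelope sketch of the "expand linearly" point above.
# All prices and costs are illustrative assumptions, not figures from the article.

def saas_gross_margin(users, price=20.0, fixed_infra=50_000.0, var_cost=0.50):
    """Traditional SaaS: mostly fixed infrastructure, tiny per-user cost."""
    revenue = users * price
    cost = fixed_infra + users * var_cost
    return (revenue - cost) / revenue

def ai_saas_gross_margin(users, price=20.0, fixed_infra=50_000.0, inference_per_user=9.0):
    """AI SaaS: every active user burns inference dollars, every month."""
    revenue = users * price
    cost = fixed_infra + users * inference_per_user
    return (revenue - cost) / revenue

for users in (10_000, 100_000, 1_000_000):
    print(f"{users:>9} users | SaaS margin {saas_gross_margin(users):5.1%} "
          f"| AI SaaS margin {ai_saas_gross_margin(users):5.1%}")
```

With these illustrative inputs, the traditional SaaS margin climbs toward ~97% at a million users, while the AI product's margin stalls in the mid-50s, because inference spend grows in lockstep with usage.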
That's a good strategy; we can work on it.
Amazing read. Appreciated your comparative cost-structure analysis between traditional SaaS and AI-powered SaaS.
Thanks Tobi
I think Duolingo is not a good example of failing to adapt to AI. Did they get negative press for layoffs? Yes. Did the stock price almost 4x since ChatGPT's launch? Also yes!
To further add to my point: you say users churned. That might be true, but Duolingo's Daily Active Users grew from 13.2m in Q2 2022 to 47.7m in Q2 2025.
Really solid framework. What stood out to me is how it balances profitability with defensibility, two things many AI startups overlook in the rush to ship. The reminder that retention and differentiation are as critical as raw capabilities is spot on.
Excellent post! Will come back to this again and again.
"You don't control Pricing. You don't control Performance".
To be fair, that's not a downside for LLM-wrappers only.
The same threat is posed by relying on hyperscalers for cloud infra: if AWS raises API rates, your margins plummet; if AWS throttles performance, same story.
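A rough sensitivity check on that point (the 60% cost share and the hike sizes below are assumptions for illustration, not numbers from the article): when a provider you don't control raises rates, the increase lands directly on your gross margin.

```python
# When you don't control pricing, a provider rate hike flows straight into margin.
api_cost_share = 0.60  # assumed: API/inference spend as a fraction of revenue today

for hike in (0.0, 0.10, 0.25, 0.50):   # provider raises rates by 0/10/25/50%
    margin = 1.0 - api_cost_share * (1.0 + hike)
    print(f"+{hike:4.0%} rate hike -> gross margin {margin:6.1%}")
```

Under these assumptions, a 50% rate hike turns a 40-point gross margin into a 10-point one overnight, with nothing you can do about it.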
++ Good post. Also, start here: 500+ LLM, AI Agents, RAG, ML System Design case studies, 300+ implemented projects, research papers in detail
https://open.substack.com/pub/naina0405/p/very-important-llm-system-design?r=14q3sp&utm_campaign=post&utm_medium=web&showWelcomeOnShare=false
Thank you, Miqdad, for sharing.
Probably the best, most to-the-point, no-BS, conclusive article on AI strategies.
I appreciate the effort here, but honestly, 7,500 words felt like a real slog for what could've hit hard at 2,000. The core insights are genuinely sharp: AI economics don't map onto SaaS at all, commoditization happens faster than founders can even react, and inference costs will absolutely kill you if you don't engineer around them from day one.
That said, some of this framing could apply to any startup. "Expensive demos versus real companies" is pretty much a universal trap. What makes AI different is how brutally compressed everything gets: features get absorbed by foundation models in quarters instead of years, and scaling actually increases your costs instead of decreasing them. That part really is new and terrifying.
What struck me most was how close the piece comes to naming the deeper problem. The analysis catalogs every single symptom perfectly: shattered moats, runaway costs, vanishing differentiation. But then it stops at calling these "strategy challenges." The more radical reading? Intelligence as a utility might just be fundamentally incompatible with venture economics. If abundance breaks artificial scarcity, no amount of clever frameworks can fix that.
Plus, the piece treats OpenAI as an "API dependency" to escape from, but misses that they're actively building the rails everyone runs on. It's not just commoditization; it's platform capture.
Still, it's a valuable wake-up call for founders. I just wish the conclusion had the guts to question not just founder tactics, but whether the whole system still makes sense.
The hard truth is that a PhD in AI coding doesn't cut the mustard on its own. You need to think about cost throughout your business modelling, especially inference cost once you scale up. You could scale up overnight, and your inference costs scale right along with you. Even giants like AWS are scratching their heads over this.
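To illustrate "think cost throughout your business modelling", a minimal per-user sketch; the request volumes, token counts, and $/1M-token rate are assumptions for the example, not any vendor's actual pricing.

```python
# Minimal unit-economics sketch: what one active user costs you in inference each month.
# Every number here is an assumption for illustration only.

requests_per_user_per_day = 30
tokens_per_request        = 2_500     # prompt + completion, assumed
price_per_million_tokens  = 5.00      # assumed blended $ rate per 1M tokens
subscription_price        = 20.00     # assumed monthly plan

monthly_tokens = requests_per_user_per_day * tokens_per_request * 30
inference_cost = monthly_tokens / 1_000_000 * price_per_million_tokens

print(f"Inference cost per active user:  ${inference_cost:.2f}/month")
print(f"Share of a ${subscription_price:.0f} subscription: {inference_cost / subscription_price:.0%}")
```

With these assumed inputs, more than half of the $20 plan is gone to inference before payroll, hosting, or acquisition costs, and that spend scales per active user, not per server.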