Great post! It is nice to see you are willing to explain why OpenAI is not as profitable as one might naively assume, despite them having commissioned Epoch AI in the past.
Do we know *how* much money Microsoft shares with OpenAI on the former's end? Apparently, according to The Tech Buzz, OpenAI subtracts how much they receive from Microsoft from the amount they send the other way round. If so, that would imply that the net revenue share is negative. I don't know more details on this, though.
https://www.techbuzz.ai/articles/leaked-docs-reveal-openai-s-865m-payment-to-microsoft-in-2025
Super interesting.
If GPT-5 was used as a base model for training GPT-5.2, or the R&D for GPT-5 was essential for training GPT-5.2, would this change the calculus? My thought is that the profits from that R&D might actually be bigger than what you calculate.
I added a footnote addressing the case where we also account for GPT-5.2.
We have now updated these numbers! Read more here: https://x.com/Jsevillamol/status/2029994426072756229?s=20
This 48% gross margin on GPT-5 is wild when you put it against the lifecycle. I've been running Wiz (my autonomous agent) for a couple months and hit this exact wall in miniature - burned through €50 in a week using Opus for everything.
Switched to Haiku for 80% of tasks. Cost dropped 15x. Quality barely changed.
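The routing logic behind that switch is simple enough to sketch. Here's a minimal illustration; the per-million-token prices and the difficulty heuristic are illustrative assumptions, not current published rates:

```python
# Minimal cost-based model router: send easy tasks to a cheap model,
# hard ones to a strong model. Prices are illustrative, not real rates.
PRICE_PER_MTOK = {          # USD per million input tokens (assumed)
    "claude-opus": 15.00,
    "claude-haiku": 1.00,
}

def pick_model(task: str) -> str:
    """Crude difficulty heuristic: long or code-heavy prompts go to Opus."""
    hard = len(task) > 2000 or "refactor" in task.lower()
    return "claude-opus" if hard else "claude-haiku"

def estimate_cost(task: str, tokens: int) -> float:
    """Cost in USD for routing this task's tokens to the chosen model."""
    model = pick_model(task)
    return tokens / 1e6 * PRICE_PER_MTOK[model]

# Routing 80% of a 10M-token monthly workload to the cheap model:
opus_only = 10e6 / 1e6 * PRICE_PER_MTOK["claude-opus"]
routed = 8e6 / 1e6 * PRICE_PER_MTOK["claude-haiku"] \
       + 2e6 / 1e6 * PRICE_PER_MTOK["claude-opus"]
print(f"Opus-only: ${opus_only:.0f}/mo, routed: ${routed:.0f}/mo")
```

Even with these made-up prices, the shape of the savings matches my experience: most of the spend comes from sending easy tasks to the expensive model.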
Wrote about what actually works for model selection: https://thoughts.jock.pl/p/claude-model-optimization-opus-haiku-ai-agent-costs-2026
The insight from your analysis - models as rapidly-depreciating infrastructure - matches what I'm seeing. The window to extract value before the next model drops is incredibly short.
If profitability depends on accelerating job displacement, then investors aren’t just betting on AI, they’re betting that social stability will survive large-scale labor erosion. History suggests that’s not a trivial assumption.
It may be naive to think so, but one can simply view a company as an entity (OpenAI isn't simple because of its peculiar ownership structure). In that case, profit is simply income minus expenditure. Press coverage suggests that OpenAI is "worth $500 billion". That is, owning the company should be as desirable as owning, say, 5,000 tons of gold, or all the houses in San Francisco, or something (such as US Treasury bonds) returning a guaranteed income of $25 billion a year (say $100 per adult in the USA, or roughly $800 a second) forever. Is OpenAI really that desirable?
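The equivalences in the comment above can be sanity-checked with a quick calculation. The gold price (~$100M per metric ton) and US adult population (~258M) are my assumed figures, and the 5% perpetuity yield is a rough stand-in for a Treasury rate:

```python
# Sanity-check the back-of-envelope equivalences for a $500B valuation.
# Assumed inputs: gold at ~$100M per metric ton, ~258M US adults, 5% yield.
valuation = 500e9

gold_tons = valuation / 100e6                       # tons of gold
income_at_5pct = 0.05 * valuation                   # $/year perpetuity
per_adult = income_at_5pct / 258e6                  # $/year per US adult
per_second = income_at_5pct / (365.25 * 24 * 3600)  # $/second

print(f"{gold_tons:,.0f} tons of gold")
print(f"${income_at_5pct / 1e9:.0f}B/year, "
      f"${per_adult:.0f} per adult, ${per_second:.0f} per second")
```

The numbers hang together: a $25B/year perpetuity works out to roughly $97 per US adult and about $790 per second.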
Great analysis. Infrastructure costs will drop, but that alone won’t make AI companies profitable. From a product strategy standpoint, the real lever is how efficiently models scale over time.
Profitability improves when products are designed so the cost per use keeps falling while value compounds, through reuse, recurring workflows (like ChatGPT Enterprise’s custom GPTs), and longer model lifetimes. Equally important is extending model relevance: if models get replaced too quickly, even efficient scaling can’t fully recoup R&D, so designing for longevity and upgrade paths is key.
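A toy unit-economics model makes the longevity point concrete. All figures here are hypothetical illustrations, not estimates for any real model; only the 48% gross margin echoes the figure discussed in the post:

```python
# Toy model: can a fixed R&D cost be recouped before the model is replaced?
# All numbers are hypothetical, not estimates of any real model's economics.
def months_to_recoup(rd_cost: float, monthly_revenue: float,
                     gross_margin: float) -> float:
    """Months of deployment needed for gross profit to cover R&D."""
    return rd_cost / (monthly_revenue * gross_margin)

rd = 500e6        # $500M training + research cost (assumed)
revenue = 100e6   # $100M/month inference revenue (assumed)
margin = 0.48     # gross margin on serving, as discussed in the post

needed = months_to_recoup(rd, revenue, margin)
print(f"Break-even after {needed:.1f} months")
# If the model is retired after 9 months, R&D is never fully recouped;
# kept for 18 months, it clears roughly $360M of gross profit beyond R&D.
```

Under these assumptions break-even comes at about 10.4 months, so whether the lifecycle is 9 months or 18 is the difference between a write-off and a solid return. That is the "designing for longevity" lever in numbers.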
Interesting insights. Is the cost of energy factored into the cost of compute? That's an important question to answer to determine profitability, both now and in the future, when energy arbitrage drives prices.