> The Information had previously reported that $35 billion of Amazon’s investment could be contingent on the company either achieving AGI or making its IPO by the end of the year. OpenAI’s announcement confirms the funding split, but says only that the additional $35 billion will arrive “in the coming months when certain conditions are met.”
Incredible.
So basically, Amazon is buying into the IPO at an early price. Maybe this is the time to divest from MSCI world. I don’t want to be the bag holder in the world’s largest pump and dump.
It can both be true at the same time: that AI is going to disrupt our world, and that OpenAI does not have a business model that supports its valuation.
Tesla is a car company with relatively small (and shrinking) sales that is worth $1.5T on the promise of [Elons_Promise_of_the_Month]
Yeah, proving my point that index funds are maybe not the safest place if you want to invest in real value. And soon, Twitter/Grok/SpaceX might be doing an IPO
You forgot to mention they solved vision based autonomous driving, but I guess that doesn’t matter if Elon = bad
SAE level 2 driver assistance is explicitly not autonomous driving.
Seems not solved:
https://fortune.com/2026/02/26/tesla-robotaxis-4x-8x-worse-t...
Gonna need a citation for that, buck.
It's this kind of dynamic that makes me pull back on my otherwise pretty AI-forward stance. There's an entire community of people who passionately believe it's obvious and undeniable that Elon Musk has solved problems that he has not solved and his companies deliver things they don't deliver. Tesla is absolutely unambiguous in their marketing material (https://www.tesla.com/fsd) that they do not have autonomous driving, but you're far from the first person I've encountered who's been tricked into believing otherwise.
I don't think that's my relationship with AI, I'm hardly an uncritical booster. But would I know if it was?
Huh? They did not "solve" vision based driving.
Of course, but if Elon=great you can ignore that
Hope daddy sees this and gives you that lollipop.
Did it ever occur to you that an entire generation of developers is going to retire in less than 20 years? They are betting that the software industry will be autonomous. Really, think of our industry like the autonomous-vehicle phenomenon: we're the drivers who are about to be shown the door. That's the bet.
The world will still need software, lots of it. Their valuation is based on an entirely developer-less future (no labor costs).
Even the rise of high-level languages did not lead to a "developer-less future". What it did was improve productivity and make software cheaper by orders of magnitude; but compiler vendors did not benefit all that much from the shift.
A high-level language or a compiler wasn't automating end-to-end reasoning for a programming task.
[dead]
OpenAI has all the name recognition (which is worth a couple billion in itself), but when it comes to actual business use cases in the here and now, Anthropic seems ahead, even more so if we are talking about software dev. Yet Anthropic is valued at less than half of OpenAI.
What is somewhat justifying OpenAI's valuation is that they are still trying for AGI. They are not just working on models that work here and now; they are still approaching "simulating worlds" from all kinds of angles (vision, image generation, video generation, world generation), presumably in hopes that this will at some point coalesce into a model with much better understanding of our world and its agency in it. If this comes to pass, OpenAI's value is near unlimited. If it doesn't, its value is at best half of what it is today.
> What is somewhat justifying OpenAI's valuation is that they are still trying for AGI.
And that's the dealbreaker for me, since they've been so adamant that scaling will take them there, while we've all been watching it hit diminishing returns for a while.
I was worried a few years back with the overwhelming buzz, but my 2017 blog post is still holding strong. To be fair, it pointed to ASI, where valuation is indeed unlimited; nowadays the definition of AGI is quite weakened in comparison. But does that weakened definition still convey an unlimited valuation?
Obligatory reminder that today's so-called "AGI" has trouble figuring out whether I should walk or drive to the car wash in order to get my dirty car washed. It has to think through the scenario step by step, whereas any human can instantly grok the right answer.
The idea/hope is that a video model would answer the car wash problem correctly. These are exactly the kinds of issues you have to solve to avoid teleporting objects around in a video, so once we manage more than a couple of seconds of coherent video, we will have something that understands the real world much better than text-based models do. Then we "just" have to somehow make a combined model that has this kind of understanding and can also write text and make tool calls.
Yes, this is kind of like Tesla promising full self driving in 2016
What are you talking about? OpenAI's ChatGPT free tier (that everyone uses) answers this in the first sentence within a couple seconds.
"If your goal is to get your dirty car washed… you should probably drive it to the car wash "
That problem went viral weeks ago, so it's no longer a valid test. At the time it was consistently tripping up all the SOTA models at least 50% of the time (you also have to use a sample size > 1, given the huge variation across attempts even with the exact same wording).
The large hosted model providers always "fix" these issues as best they can after they become popular. It's a consistent pattern, repeated many times now, and they benefit when the fix seemingly "debunks" the issue well after the fact. Often the original behavior can be reproduced once the wording/numbers/etc are moved far enough from the original prompt.
For example, I just asked ChatGPT "The boat wash is 50 meters down the street. Should I drive, sail, or walk there to get my yacht detailed?" and it recommended walking. I'm sure with a tiny bit more effort, OpenAI could patch it to the point where it's a lot harder to confuse with this specific flavor of problem, but it doesn't alter the overall shape.
This question is obviously ambiguous. The context here on HN includes "questions LLMs are stupid about, I mention boat wash, clearly you should take the boat to the boat wash."
But this question posed to humans is plenty ambiguous, because it doesn't specify whether you need to get to the boat or not, or whether the boat is at the wash already. ChatGPT Free Tier handles the ambiguity; note the finishing remark:
"If the boat wash is 50 meters down the street…
Drive? By the time you start the engine, you’re already there.
Sail? Unless there’s a canal running down your street, that’s going to be a very short and very awkward voyage.
Walk? You’ll be there in about 40 seconds.
The obvious winner is walk — unless this is a trick question and your yacht is currently parked in your living room.
If your yacht is already in the water and the wash is dock-accessible, then you’d idle it over. But if you’re just going there to arrange detailing, definitely walk."
I don't understand what occasional hiccups prove. The models can pass college acceptance tests in advanced educational topics better than 99% of the human population, and because they occasionally have a shortcoming, it means they're worse than humans somehow? Those edge cases are quickly going from 1% -> 0.01% too...
"any human can instantly grok the right answer."
When you ask a human about general world knowledge, they don't have the generality to give good answers for 90% of it. Even on very basic questions like this one, humans will trip up far more often than the frontier LLMs do.
I just don't know how to engage with these criticisms anymore. Do you not see how increasingly convoluted the "simple question LLMs can't answer" bar has gotten since 2022? Do the human beings you know not have occasional brain farts where they recommend dumb things that don't make much sense?
> Do the human beings you know not have occasional brain farts where they recommend dumb things that don't make much sense?
Not that dumb, no. That's why it's laughable to claim that LLMs are intelligent.
> What is somewhat justifying OpenAI's valuation is that they are still trying for AGI.
"AGI" is the IPO.
> If this comes to pass OpenAI's value is near unlimited.
How?
If we have AGI, we have a scenario where human knowledge-based value creation as we know it is suddenly worthless. It's not a stretch to imagine that human labor-based value creation wouldn't be far behind. Altman himself has said that it would break capitalism.
This isn't a value proposition for a business, it's an end-of-value proposition for society. The only people who find real value in that are people who spend far too much time online doing things like arguing about Roko's Basilisk - which is just Pascal's Wager with GPUs - and people who are so wealthy that they've been disconnected from real-world consequences.
The only reason anyone sees value in this is because the second group of people think it'll serve their self-concept as the best and brightest humanity has ever had to offer. They're confusing ego with ability to create economic value.
"End of human-based value creation" is tantamount to post-scarcity. It "breaks" capitalism because it supposedly obviates the resource allocation problem that the free-market economy is the answer to. It's what Karl Marx actually pointed to as his utopian "fully realized communism". Most people would think of that as a pipe dream, but if you actually think it's viable, why wouldn't you want it?
It can both be true that
a) AI is going to replace a Bazillion-Dollar Industry and that
b) being an AI model provider does not allow to capture margins above 5% long-term
I am not saying that this is what will happen, but it's a plausible scenario. Without farmers we would all be dead, but that does not mean they capture monopoly rents on their assets.
But Anthropic is the one that is disrupting software development? So why are we not piling into that?
Exactly, the dot com bubble didn't mean that the internet was just a fad.
I'm curious how they define AGI technically. Seems like you would want that to be a tight definition.
Didn't they already define it as "a system capable of generating at least $100 billion in profit"?
It just needs to be anything that will force OpenAI to IPO.
I'd love to know how they define AGI.
They've previously defined AGI as an AI that can directly create $100B in economic value.
Hmm interesting, thanks. I wonder how much value it's already created.
Altman Gets Investment?
Hopefully Microsoft is selling parts of their share of this trash into these funding rounds...
Circular-breathing causes the air to heat up, causing expansion. This is how a balloon can expand even when someone is breathing air from inside it.
s/breathing/investment/g s/balloon/bubble/g s/air/money/g
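For anyone who wants to run the joke literally, those are standard `s/pattern/replacement/g` substitutions; a small Python sketch applying them in order to the quoted sentence:

```python
import re

# The parent comment's sentence, run through its three sed-style substitutions.
text = ("Circular-breathing causes the air to heat up, causing expansion. "
        "This is how a balloon can expand even when someone is breathing "
        "air from inside it.")

for pattern, repl in [("breathing", "investment"),
                      ("balloon", "bubble"),
                      ("air", "money")]:
    text = re.sub(pattern, repl, text)  # equivalent of s/pattern/repl/g

print(text)
# -> Circular-investment causes the money to heat up, causing expansion.
#    This is how a bubble can expand even when someone is investment money
#    from inside it.
```

Note the order matters: "breathing" is replaced before "air", so "breathing air" becomes the ungrammatical but faithful "investment money".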
I performed the suggested substitution. What is the heating up of money in that analogy?
Sarcastically, it's "the vibes intensifying".
(Vibes ~ Vibrations ~ Heat)
Tbf it's a reasonable question... I think it's a little tricky to pin down the equivalent of "kinetic energy" in purely economic terms, though you might look at the rate of flow of money as some analogy for the speed/energy of particles (speed of individual dollars changing hands). In that sense, the more frequent and larger these deals get, the hotter the market is. This is not a novel analogy.
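One standard macro quantity that already plays this temperature-like role is the velocity of money from the equation of exchange, MV = PQ: how fast each dollar changes hands. A toy calculation (all figures are made up for illustration):

```python
# Equation of exchange: M * V = P * Q, so V = (P * Q) / M.
# V is the "temperature-like" number in the analogy above:
# how often a unit of money changes hands per period.

def velocity_of_money(nominal_gdp: float, money_supply: float) -> float:
    """Average number of times a unit of money is spent per period."""
    return nominal_gdp / money_supply

# A "hotter" market: more and larger transactions against the same money stock.
cool = velocity_of_money(nominal_gdp=20e12, money_supply=16e12)  # 1.25
hot = velocity_of_money(nominal_gdp=28e12, money_supply=16e12)   # 1.75

print(f"cool V = {cool:.2f}, hot V = {hot:.2f}")
```

Under this reading, bigger and more frequent deals raise nominal transaction volume against a roughly fixed money stock, which is exactly "the market getting hotter".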
There is not a single OpenAI model in the top 10 on openrouter's ranking page. The market is saying something about the comparative value of OpenAI.
Edit: yes, it is true that many people do integrate directly with OpenAI. That doesn't negate the fact that Openrouter users are largely not using OpenAI.
Methodology problems aside, do we have any idea how big OpenRouter is as compared to the big providers?
OpenRouter claims "5M+" users; OpenAI is claiming >900M weekly active users.
I don't really think it's possible to learn anything about the broader market by looking at the OpenRouter model rankings.
1. openrouter is API usage. There is obviously a consumer side too
2. people often use openrouter for the sole purpose of having a unified chat completions API
3. OpenAI invented chat completions; if you use openrouter for chat completions, you can often just switch your endpoint URL to point to the OAI endpoint to avoid the openrouter surcharge!
4. Hence anyone with large enough volume will very likely not use openrouter for OpenAI; there is an active incentive to take the easy route of changing the endpoint URL to OAI’s
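Point 3 above can be made concrete without any SDK: both services speak the same chat-completions wire format, so switching is just a different base URL and API key. A stdlib-only sketch (requests are built but never sent; the keys are placeholders, and the model names follow each service's usual convention):

```python
import json
import urllib.request

def chat_request(base_url: str, api_key: str, model: str) -> urllib.request.Request:
    """Build a chat-completions request; the payload shape is identical for both."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "hello"}],
    }).encode()
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
        method="POST",
    )

# Same call, two endpoints -- only the URL, key, and model name differ.
via_openrouter = chat_request("https://openrouter.ai/api/v1", "OR_KEY", "openai/gpt-4o")
direct = chat_request("https://api.openai.com/v1", "OAI_KEY", "gpt-4o")

print(via_openrouter.full_url)  # https://openrouter.ai/api/v1/chat/completions
print(direct.full_url)          # https://api.openai.com/v1/chat/completions
```

Since everything else in the request is unchanged, high-volume OpenAI users have little reason to keep paying the openrouter surcharge, which is the incentive point 4 describes.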
> The market is saying something about the comparative value of OpenAI.
Is it?
At what point are the models going to all be "good enough", with the differentiating factor being everything else, other than model ranking?
That day will come. Not everyone needs a Ferrari.
Edit: I misread the parent, I think they're saying the same thing.
Model rankings are irrelevant. No one cares.
The differentiating factor will be access to proprietary training data. Everyone can scrape the public web and use that to train an LLM. The frontier companies are spending a fortune to buy exclusive licenses to private data sources, and even hiring expert humans specifically to create new training data on priority topics.
Including paying poets and other experts by the hour to improve the models https://conversationswithtyler.com/episodes/brendan-foody/
> At what point are the models going to all be "good enough", with the differentiating factor being everything else, other than model ranking?
It's already come for vast swathes of industries.
Most organizations have already been able to operationalize what are essentially GPT-4 and GPT-5 wrappers for standard enterprise use cases such as network security (e.g. Horizon3) and internal knowledge discovery and synthesis (e.g. GleanAI back in 2024-25).
Yes, and that is why I used the phrase comparative value. The concept of winning business based on being #1 on the benchmarks is dead.
I agree, and most of my peers do as well. This is why most of us shifted to funding AI Applications startups back in 2023-24. Most of these players are still in stealth or aren't household names, but neither are ServiceNow, Salesforce, Palo Alto Networks, Wiz, or Snowflake.
Foundation models have reached a relative plateau, and much of the recent hype wasn't due to enhanced model performance but to smart packaging on top of existing capabilities to solve business outcomes (e.g. OpenClaw, Anthropic's business suite, etc).
Most foundation model rounds are essentially growth equity rounds (not venture capital) to finance infra/DC buildouts to scale out delivery or custom ASICs to enhance operating margins.
This isn't a bad thing - it means AI, in the colloquial sense, has matured to the point that it has become reality.
This could just be because everyone is using direct OpenAI api keys when using OpenAI.
Or, their customers integrate with them directly.
Sample bias.
If OpenAI is Pied Piper, who is Russ Hanneman in all this?
That's a pretty lofty valuation for a company that has yet to demonstrate code generation anywhere near Anthropic's models, if they're leaning into the engineering angle.
Many engineers use Codex 5.3 and find it better, including Hashicorp's Mitchell.
Does anyone have ethical concerns about using OpenAI, given the money donated to the current US administration in one way or another? I will search for more accurate details about that situation. I know about several other ethical concerns people have with OpenAI - copyright and other considerations regarding the work being trained on, the lack of action on users who are harmed by the product (often around mental health), environmental concerns, and quite a few others - but I am interested in whether many people think their political donations are an issue.
So let's see if I understood this one: they got $110 billion on the promise that either AGI will happen soon (:) or they go public before the end of the year. Either way, you get to double your $110 billion no matter what (who will be left to pay the full bill afterwards, the public or the public?).
Very interesting. I will follow it closely, mostly to see how you get an ROI on $110 billion in a couple of years.
Only $730B? Why stop there? As long as we're making stuff up, let's go big. What about $10T?
They have to save the big T for IPO.
On a tangent, I remember companies like Slack triggering the unicorn craze. They said that it was just better to aim for a billion than some number like 900M or 1.2B, because psychologically, it meant more to employees, investors, and customers.
OpenAI is in that place where nobody really cares for these mind games. It's not very reliable. But it is useful enough to pay for. It's cheap enough to be an impulse purchase where some guy decides to just subscribe to ChatGPT because they're working on an important slide or sketching a logo.
Rookie numbers, I say $100T. Go big or go home.
https://paintraincomic.com/comic/first-date/
Remember when it was a huge milestone when gigantic companies like Apple and Microsoft were striving to be the first $1T company backed with decades of building actual businesses with actual profit?
Good times.
It’s Tesla only big tech are the suckers.
$730 billion is certainly a bubble that will pop sooner or later.
Source: https://openai.com/index/scaling-ai-for-everyone/ (https://news.ycombinator.com/item?id=47180302)
I thought with OpenClaw they'd get more than a 3.67x multiplier of what Anthropic raised.